
Conversation

@ololobus

No description provided.

tglsfdc and others added 30 commits November 19, 2024 18:26
In the dim past we figured it was okay to ignore collations
when combining UNION set-operation nodes into a single N-way
UNION operation.  I believe that was fine at the time, but
it stopped being fine when we added nondeterministic collations:
the semantics of distinct-ness are affected by those.  v17 made
it even less fine by allowing per-child sorting operations to
be merged via MergeAppend, although I think we accidentally
avoided any live bug from that.

Add a check that collations match before deciding that two
UNION nodes are equivalent.  I also failed to resist the
temptation to comment plan_union_children() a little better.

Back-patch to all supported branches (v13 now), since they
all have nondeterministic collations.

Discussion: https://postgr.es/m/3605568.1731970579@sss.pgh.pa.us
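A sketch of the hazard, assuming an ICU-enabled build (the collation
definition follows the documentation's case-insensitive example):

```sql
CREATE COLLATION case_insensitive
  (provider = icu, locale = 'und-u-ks-level2', deterministic = false);

-- under the nondeterministic collation, 'A' and 'a' are duplicates,
-- so this UNION returns one row ...
SELECT 'A' COLLATE case_insensitive UNION SELECT 'a';

-- ... while under a deterministic collation it returns two
SELECT 'A' COLLATE "C" UNION SELECT 'a';
```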
Ordinarily transformSetOperationTree will collect all UNION/
INTERSECT/EXCEPT steps into the setOperations tree of the topmost
Query, so that leaf queries do not contain any setOperations.
However, it cannot thus flatten a subquery that also contains
WITH, ORDER BY, FOR UPDATE, or LIMIT.  I (tgl) forgot that in
commit 07b4c48 and wrote an assertion in rule deparsing that
a leaf's setOperations would always be empty.

If it were nonempty then we would want to parenthesize the subquery
to ensure that the output represents the setop nesting correctly
(e.g. UNION below INTERSECT had better get parenthesized).  So
rather than just removing the faulty Assert, let's change it into
an additional case to check to decide whether to add parens.  We
don't expect that the additional case will ever fire, but it's
cheap insurance.

Man Zeng and Tom Lane

Discussion: https://postgr.es/m/tencent_7ABF9B1F23B0C77606FC5FE3@qq.com
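For instance, a leaf subquery kept unflattened by its LIMIT needs the extra
parens on deparse (hypothetical sketch):

```sql
-- the inner UNION carries its own LIMIT, so transformSetOperationTree
-- leaves it as a leaf query with a nonempty setOperations tree; when the
-- view is deparsed, the parens keep the UNION below the INTERSECT
CREATE VIEW setop_leaf AS
  SELECT 1 AS x
  INTERSECT
  (SELECT 1 UNION SELECT 2 LIMIT 1);
```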
Apparently this information has been outdated since it was first committed,
because we adopted a different implementation during development per
reviews and this detail was not updated in the README.

This has been wrong since commit 0ac5ad5 introduced the file in
2013.  Backpatch to all live branches.

Reported-by: Will Mortensen <will@extrahop.com>
Discussion: https://postgr.es/m/CAMpnoC6yEQ=c0Rdq-J7uRedrP7Zo9UMp6VZyP23QMT68n06cvA@mail.gmail.com
It failed to set the archive_command as it desired because of a syntax
problem.  Oversight in commit 90bcc7c.

This bug doesn't cause the test to fail, because the test only checks
pg_rewind's output messages, not the actual outcome (and the outcome in
both cases is that the file is kept, not deleted).  But in either case
the message about the file being kept is there, so it's hard to get
excited about doing much more.

Reported-by: Antonin Houska <ah@cybertec.at>
Author: Alexander Kukushkin <cyberdemn@gmail.com>
Discussion: https://postgr.es/m/7822.1732167825@antos
If the executable's .o files were produced by a compiler (probably gcc)
not using -moutline-atomics, and the corresponding .bc files were
produced by clang using -moutline-atomics (probably by default), then
the generated bitcode functions would have the target attribute
"+outline-atomics", and could fail at runtime when inlined.  If the
target ISA at bitcode generation time was armv8-a (the most conservative
aarch64 target, no LSE), then LLVM IR atomic instructions would generate
calls to functions in libgcc.a or libclang_rt.*.a that switch between
LL/SC and faster LSE instructions depending on a runtime AT_HWCAP check.
Since the corresponding .o files didn't need those functions, they
wouldn't have been included in the executable, and resolution would
fail.

At least Debian and Ubuntu are known to ship gcc and clang compilers
that target armv8-a but differ on the use of outline atomics by default.

Fix, by suppressing the outline atomics attribute in bitcode explicitly.
Inline LL/SC instructions will be generated for atomic operations in
bitcode built for armv8-a.  Only configure scripts are adjusted for now,
because the meson build system doesn't generate bitcode yet.

This doesn't seem to be a new phenomenon, so real cases of functions
using atomics that are inlined by JIT must be rare in the wild given how
long it took for a bug report to arrive.  The reported case could be
reduced to:

postgres=# set jit_inline_above_cost = 0;
SET
postgres=# set jit_above_cost = 0;
SET
postgres=# select pg_last_wal_receive_lsn();
WARNING:  failed to resolve name __aarch64_swp4_acq_rel
FATAL:  fatal llvm error: Program used external function
'__aarch64_swp4_acq_rel' which could not be resolved!

The change doesn't affect non-ARM systems or later target ISAs.

Back-patch to all supported releases.

Reported-by: Alexander Kozhemyakin <a.kozhemyakin@postgrespro.ru>
Discussion: https://postgr.es/m/18610-37bf303f904fede3%40postgresql.org
Previously we checked "for <stdbool.h> that conforms to C99" using
autoconf's AC_HEADER_STDBOOL macro.  We've required C99 since PostgreSQL
12, so the test was redundant, and under C23 it was broken: autoconf
2.69's implementation doesn't understand C23's new empty header (the
macros it's looking for went away, replaced by language keywords).
Later autoconf versions fixed that, but let's just remove the
anachronistic test.

HAVE_STDBOOL_H and HAVE__BOOL will no longer be defined, but they
weren't directly tested in core or likely extensions (except in 11, see
below).  PG_USE_STDBOOL (or USE_STDBOOL in 11 and 12) is still defined
when sizeof(bool) is 1, which should be true on all modern systems.
Otherwise we define our own bool type and values of size 1, which would
fail to compile under C23 as revealed by the broken test.  (We'll
probably clean that dead code up in master, but here we want a minimal
back-patchable change.)

This came to our attention when GCC 15 recently started using C23
by default and failed to compile the replacement code, as reported by
Sam James and build farm animal alligator.

Back-patch to all supported releases, and then to two older versions that
also know about <stdbool.h>, per the recently-out-of-support policy[1].
12 requires C99 so it's much like the supported releases, but 11 only
assumes C89 so it now uses AC_CHECK_HEADERS instead of the overly picky
AC_HEADER_STDBOOL.  (I could find no discussion of which historical
systems had <stdbool.h> but failed the conformance test; if they ever
existed, they surely aren't relevant to that policy's goals.)

[1] https://wiki.postgresql.org/wiki/Committing_checklist#Policies

Reported-by: Sam James <sam@gentoo.org>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org> (master version)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (approach)
Discussion: https://www.postgresql.org/message-id/flat/87o72eo9iu.fsf%40gentoo.org
Tcl 9 changed several API functions to take Tcl_Size, which is
ptrdiff_t, instead of int, for 64-bit enablement.  We have to change a
few local variables to be compatible with that.  We also provide a
fallback typedef of Tcl_Size for older Tcl versions.

The affected variables are used for quantities that will not approach
values beyond the range of int, so this doesn't change any
functionality.

Reviewed-by: Tristan Partin <tristan@partin.io>
Discussion: https://www.postgresql.org/message-id/flat/bce0fe54-75b4-438e-b42b-8e84bc7c0e9c%40eisentraut.org
On ARM platforms where the baseline CPU target lacks CRC instructions,
we need to supply a -march flag to persuade the compiler to compile
such instructions.  It turns out that our existing choice of
"-march=armv8-a+crc" has not worked for some time, because recent gcc
will interpret that as selecting software floating point, and then
will spit up if the platform requires hard-float ABI, as most do
nowadays.  The end result was to silently fall back to software CRC,
which isn't very desirable since in practice almost all currently
produced ARM chips do have hardware CRC.

We can fix this by using "-march=armv8-a+crc+simd" to enable the
correct ABI choice.  (This has no impact on the code actually
generated, since neither of the files we compile with this flag
does any floating-point stuff, let alone SIMD.)  Keep the test for
"-march=armv8-a+crc" since that's required for soft-float ABI,
but try that second since most platforms we're likely to build on
use hard-float.

Since this hasn't been working as intended on the last several years'
worth of gcc releases, back-patch to all supported branches.

Discussion: https://postgr.es/m/4496616.iHFcN1HehY@portable-bastien
This WARNING appeared because SearchSysCacheLocked1() read
cc_relisshared before catcache initialization, when the field is false
unconditionally.  On the basis of reading false there, it constructed a
locktag as though pg_tablespace weren't relisshared.  Only shared
catalogs could be affected, and only GRANT TABLESPACE was affected in
practice.  SearchSysCacheLocked1() callers use one other shared-relation
syscache, DATABASEOID.  DATABASEOID is initialized by the end of
CheckMyDatabase(), making the problem unreachable for pg_database.

Back-patch to v13 (all supported versions).  This has no known impact
before v16, where ExecGrant_common() first appeared.  Earlier branches
avoid trouble by having a separate ExecGrant_Tablespace() that doesn't
use LOCKTAG_TUPLE.  However, leaving this unfixed in v15 could ensnare a
future back-patch of a SearchSysCacheLocked1() call.

Reported by Aya Iwata.

Discussion: https://postgr.es/m/OS7PR01MB11964507B5548245A7EE54E70EA212@OS7PR01MB11964.jpnprd01.prod.outlook.com
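The shape of the affected command, roughly (role name hypothetical):

```sql
-- run early in a session, before the catcache could report that
-- pg_tablespace is shared, this could emit the spurious WARNING
GRANT CREATE ON TABLESPACE pg_default TO some_role;
```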
If passed a read-write expanded object pointer, the EEOP_NULLIF
code would hand that same pointer to the equality function
and then (unless equality was reported) also return the same
pointer as its value.  This is no good, because a function that
receives a read-write expanded object pointer is fully entitled
to scribble on or even delete the object, thus corrupting the
NULLIF output.  (This problem is likely unobservable with the
equality functions provided in core Postgres, but it's easy to
demonstrate with one coded in plpgsql.)

To fix, make sure the pointer passed to the equality function
is read-only.  We can still return the original read-write
pointer as the NULLIF result, allowing optimization of later
operations.

Per bug #18722 from Alexander Lakhin.  This has been wrong
since we invented expanded objects, so back-patch to all
supported branches.

Discussion: https://postgr.es/m/18722-fd9e645448cc78b4@postgresql.org
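A minimal sketch of where the read-write pointer flows (a full demonstration
additionally needs an equality function coded in plpgsql, omitted here):

```sql
DO $$
DECLARE
  a int[] := ARRAY[1,2,3];  -- plpgsql array variables carry read-write expanded datums
  r int[];
BEGIN
  -- NULLIF hands 'a' to the type's equality function; before the fix, the
  -- very same read-write pointer was also returned as the result
  r := NULLIF(a, ARRAY[4,5,6]);
  RAISE NOTICE 'r = %', r;
END
$$;
```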
When using a pipeline, a transaction starts from the first command and
is committed with a Sync message or when the pipeline ends.

Functions like IsInTransactionBlock() or PreventInTransactionBlock()
were already able to understand a pipeline as being in a transaction
block, but it was not the case of CheckTransactionBlock().  This
function is called for example to generate a WARNING for SET LOCAL,
complaining that it is used outside of a transaction block.

The current state of the code caused multiple problems, like:
- SET LOCAL executed at any stage of a pipeline issued a WARNING, even
if the command was at least second in line where the pipeline is in a
transaction state.
- LOCK TABLE failed when invoked at any step of a pipeline, even if it
should be able to work within a transaction block.

The pipeline protocol assumes that the first command of a pipeline is
not part of a transaction block, and that any follow-up commands are
considered to be within a transaction block.

This commit changes the backend so that an implicit transaction block is
started each time the first Execute message of a pipeline has finished
processing, with this implicit transaction block ended once a Sync is
processed.  The checks based on XACT_FLAGS_PIPELINING in the routines
checking if we are in a transaction block are no longer necessary: it is
enough to rely on the existing ones.

Some tests are added to pgbench; these can be back-patched down to v17
where \syncpipeline is involved and down to v14 where \startpipeline and
\endpipeline are available.  This is unfortunately limited regarding the
error patterns that can be checked, but it provides coverage for various
pipeline combinations to check if these succeed or fail.  These tests
are able to capture the case of SET LOCAL's WARNING.  The author has
proposed a different feature to improve the coverage by adding similar
meta-commands to psql where error messages could be checked, something
more useful for the cases where commands cannot be used in transaction
blocks, like REINDEX CONCURRENTLY or VACUUM.  This is considered as
future work for v18~.

Author: Anthonin Bonnefoy
Reviewed-by: Jelte Fennema-Nio, Michael Paquier
Discussion: https://postgr.es/m/CAO6_XqrWO8uNBQrSu5r6jh+vTGi5Oiyk4y8yXDORdE2jbzw8xw@mail.gmail.com
Backpatch-through: 13
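A pgbench script sketch of the behavior these tests cover (meta-command
availability as noted above):

```
\startpipeline
SELECT 1;
SET LOCAL work_mem = '4MB';
\endpipeline
```

The SET LOCAL here is not the first command of the pipeline, so with this
commit it runs inside the implicit transaction block and raises no WARNING.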
Branches before 16 can't be compiled with a C23 compiler (see
deprecation warnings silenced by commit f9a56e7, and non-back-patchable
changes made in 16 by commit 1c27d16).  Test __STDC_VERSION__, and if
it's above C17 then try appending -std=gnu17.  The test is done with the
user's CFLAGS, so an acceptable language version can also be configured
manually that way.

This is done in branches 15 and older, back to 9.2, per policy of
keeping them buildable with modern tools.

Discussion: https://postgr.es/m/87o72eo9iu.fsf%40gentoo.org
Commit 9044fc1 added some files from upstream LLVM.  These files
have different whitespace rules, which make the git whitespace checks
powered by gitattributes fail.  To fix, add those files to the exclude
list.

During pgbench's table initialization, progress updates could display
leftover characters from the previous message if the new message
was shorter. This commit resolves the issue by appending spaces to
the current message to fully overwrite any remaining characters from
the previous line.

Back-patch to all the supported versions.

Author: Yushi Ogiwara, Tatsuo Ishii, Fujii Masao
Reviewed-by: Tatsuo Ishii, Fujii Masao
Discussion: https://postgr.es/m/9a9b8b95b6a709877ae48ad5b0c59bb9@oss.nttdata.com
This reverts commit d77f912 on all stable branches, due to concerns
regarding the compatibility side effects this could create in a minor
release.  The change still exists on HEAD.

Discussion: https://postgr.es/m/CA+TgmoZqRgeFTg4+Yf_CMRRXiHuNz1u6ZC4FvVk+rxw0RmOPnw@mail.gmail.com
Backpatch-through: 13
Commit 9044fc1 skipped SectionMemoryManager.h in headerscheck, and
by extension also cpluspluscheck, because it's C++ and would fail both
tests.  That worked in master and REL_17_STABLE due to 7b8e2ae, but
older branches have a separate cpluspluscheck script.  We need to copy
the filtering rule into there too.

This problem was being reported by CI's CompilerWarnings task in the 15
and 16 branches, but was a victim of alert fatigue syndrome (unrelated
problems in the back-branches).

Fix 16, and back-patch to 13, as those are the live branches that have a
separate cpluspluscheck script.
Commit 352f6f2 used %d instead of %lu to format DWORD (unsigned long)
with psprintf().  The _WIN32_WINNT value recently changed for MinGW in
REL_15_STABLE (commit d700e8d), so the code was suddenly being
compiled, with warnings from gcc.

The warnings were already fixed in 16+ by commits 495ed0e and a9bc04b
after the _WIN32_WINNT value was increased there.  14 and 13 didn't warn
because they still use a lower value for MinGW, and supported versions
of Visual Studio should compile the code in all live branches but don't
check our format string.

The change doesn't affect the result: sizeof(int) == sizeof(long) on
this platform, and the values are computed with expressions that cannot
exceed INT_MAX so were never interpreted as negative.

Back-patch the formatting change from those commits into 13-15.  This
should turn CI's 15 branch green again and stop fairywren from warning
about that on 15.

Reported-by: Andres Freund <andres@anarazel.de>
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/t2vjrcb3bloxf5qqvxjst6r7lvrefqyecxgt2koy5ho5b5glr2%40yuupmm6whgob
The loops over cursor argument variables neglected to ever advance
"prevvar".  The code would accidentally do the right thing anyway
when removing the first or second list entry, but if it had to
remove the third or later entry then it would also remove all
entries between there and the first entry.  AFAICS this would
only matter for cursors that reference out-of-scope variables,
which is a weird Informix compatibility hack; between that and
the lack of impact for short lists, it's not so surprising that
nobody has complained.  Nonetheless it's a pretty obvious bug.

It would have been more obvious if these loops used a more standard
coding style for chasing the linked lists --- this business with the
"prev" pointer sometimes pointing at the current list entry is
confusing and overcomplicated.  So rather than just add a minimal
band-aid, I chose to rewrite the loops in the same style we use
elsewhere, where the "prev" pointer is NULL until we are dealing with
a non-first entry and we save the "next" pointer at the top of the
loop.  (Two of the four loops touched here are not actually buggy,
but it seems better to make them all look alike.)

Coverity discovered this problem, but not until 2b41de4 added code
to free no-longer-needed arguments structs.  With that, the incorrect
link updates are possibly touching freed memory, and it complained
about that.  Nonetheless the list corruption hazard is ancient, so
back-patch to all supported branches.
Previously, it set only DELAY_CHKPT_COMPLETE. That was important,
because it meant that if the XLOG_SMGR_TRUNCATE record preceded a
XLOG_CHECKPOINT_ONLINE record in the WAL, then the truncation would also
happen on disk before the XLOG_CHECKPOINT_ONLINE record was
written.

However, it didn't guarantee that the sync request for the truncation
was processed before the XLOG_CHECKPOINT_ONLINE record was written. By
setting DELAY_CHKPT_START, we guarantee that if an XLOG_SMGR_TRUNCATE
record is written to WAL before the redo pointer of a concurrent
checkpoint, the sync request queued by that operation must be processed
by that checkpoint, rather than being left for the following one.

This is a refinement of commit 412ad7a.  Back-patch to all supported
releases, like that commit.

Author: Robert Haas <robertmhaas@gmail.com>
Reported-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2B-2rjGZC2kwqr2NMLBcEBp4uf59QT1advbWYF_uc%2B0Aw%40mail.gmail.com
check_restrict_nonsystem_relation_kind() correctly uses guc_malloc()
in v16 and later.  But in older branches it must use malloc()
directly, and it forgot to check for failure return.
Faulty backpatching of 66e9444.

Karina Litskevich

Discussion: https://postgr.es/m/CACiT8iZ=atkguKVbpN4HmJFMb4+T9yEowF5JuPZG8W+kkZ9L6w@mail.gmail.com
These format codes produce or consume strings of digits, so they
should be labeled with is_digit = true, but they were not.
This has effect in only one place, where is_next_separator()
is checked to see if the preceding format code should slurp up
all the available digits.  Thus, with a format such as '...SSFF3'
with remaining input '12345', the 'SS' code would consume all
five digits (and then complain about seconds being out of range)
when it should eat only two digits.

Per report from Nick Davies.  This bug goes back to d589f94
where the FFn codes were introduced, so back-patch to v13.

Discussion: https://postgr.es/m/AM8PR08MB6356AC979252CFEA78B56678B6312@AM8PR08MB6356.eurprd08.prod.outlook.com
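For example:

```sql
-- before the fix, SS slurped all five digits and then complained that
-- seconds were out of range; with it, SS takes '12' and FF3 takes '345'
SELECT to_timestamp('12345', 'SSFF3');
```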
Yoran Heling reported a case where a data type could be dropped
while references to its OID remain behind in pg_amproc.  This
causes getObjectDescription to fail, which blocks dropping the
operator family (since our DROP code likes to construct descriptions
of everything it's dropping).  The proper fix for this requires
adding more pg_depend entries.  But to allow DROP to go through with
already-corrupt catalogs, tweak getObjectDescription to print "???"
for the type instead of failing when it processes such an entry.

I changed the logic for pg_amop similarly, for consistency,
although it is not known that the problem can manifest in pg_amop.

Per report from Yoran Heling.  Back-patch to all supported
branches (although the problem may be unreachable in v13).

Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021
Usually an entry in pg_amop or pg_amproc does not need a dependency on
its amoplefttype/amoprighttype/amproclefttype/amprocrighttype types,
because there is an indirect dependency via the argument types of its
referenced operator or procedure, or via the opclass it belongs to.
However, for some support procedures in some index AMs, the argument
types of the support procedure might not mention the column data type
at all.  Also, the amop/amproc entry might be treated as "loose" in
the opfamily, in which case it lacks a dependency on any particular
opclass; or it might be a cross-type entry having a reference to a
datatype that is not its opclass' opcintype.

The upshot of all this is that there are cases where a datatype can
be dropped while leaving behind amop/amproc entries that mention it,
because there is no path in pg_depend showing that those entries
depend on that type.  Such entries are harmless in normal activity,
because they won't get used, but they cause problems for maintenance
actions such as dropping the operator family.  They also cause pg_dump
to produce bogus output.  The previous commit put a band-aid on the
DROP OPERATOR FAMILY failure, but a real fix is needed.

To fix, add pg_depend entries showing that a pg_amop/pg_amproc entry
depends on its lefttype/righttype.  To avoid bloating pg_depend too
much, skip this if the referenced operator or function has that type
as an input type.  (I did not bother with considering the possible
indirect dependency via the opclass' opcintype; at least in the
reported case, that wouldn't help anyway.)

Probably, the reason this has escaped notice for so long is that
add-on datatypes and relevant opclasses/opfamilies are usually
packaged as extensions nowadays, so that there's no way to drop
a type without dropping the referencing opclasses/opfamilies too.
Still, in the absence of pg_depend entries there's nothing that
constrains DROP EXTENSION to drop the opfamily entries before the
datatype, so it seems possible for a DROP failure to occur anyway.

The specific case that was reported doesn't fail in v13, because
v13 prefers to attach the support procedure to the opclass not the
opfamily.  But it's surely possible to construct other edge cases
that do fail in v13, so patch that too.

Per report from Yoran Heling.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021
Our parallel-mode code only works when we are executing a query
in full, so ExecutePlan must disable parallel mode when it is
asked to do partial execution.  The previous logic for this
involved passing down a flag (variously named execute_once or
run_once) from callers of ExecutorRun or PortalRun.  This is
overcomplicated, and unsurprisingly some of the callers didn't
get it right, since it requires keeping state that not all of
them have handy; not to mention that the requirements for it were
undocumented.  That led to assertion failures in some corner
cases.  The only state we really need for this is the existing
QueryDesc.already_executed flag, so let's just put all the
responsibility in ExecutePlan.  (It could have been done in
ExecutorRun too, leading to a slightly shorter patch -- but if
there's ever more than one caller of ExecutePlan, it seems better
to have this logic in the subroutine than the callers.)

This makes those ExecutorRun/PortalRun parameters unnecessary.
In master it seems okay to just remove them, returning the
API for those functions to what it was before parallelism.
Such an API break is clearly not okay in stable branches,
but for them we can just leave the parameters in place after
documenting that they do nothing.

Per report from Yugo Nagata, who also reviewed and tested
this patch.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/20241206062549.710dc01cf91224809dd6c0e1@sraoss.co.jp
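One way partial execution arises (sketch):

```sql
-- fetching from a cursor runs the underlying plan a piece at a time,
-- so the executor must not run it in parallel mode
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM generate_series(1, 1000000) g;
FETCH 10 FROM c;
COMMIT;
```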
It looks like the example case was once modified to increase the number
of rows but the EXPLAIN ANALYZE output wasn't updated to reflect that.

Also adjust the text which discusses the index sizes.  With the example
table size, the bloom index isn't quite 8 times more space efficient
than the btree indexes.

Discussion: https://postgr.es/m/CAApHDvovx8kQ0=HTt85gFDAwmTJHpCgiSvRmQZ_6u_g-vQYM_w@mail.gmail.com
Backpatch-through: 13, all supported versions
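The example in question, roughly (abridged; the exact column list and row
count of the documentation's example are as assumed here):

```sql
CREATE TABLE tbloom AS
  SELECT
    (random() * 1000000)::int AS i1,
    (random() * 1000000)::int AS i2,
    (random() * 1000000)::int AS i3,
    (random() * 1000000)::int AS i4,
    (random() * 1000000)::int AS i5,
    (random() * 1000000)::int AS i6
  FROM generate_series(1, 10000000);

CREATE INDEX bloomidx ON tbloom USING bloom (i1, i2, i3, i4, i5, i6);
```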
Since commit 97550c0, these failed with
"PANIC:  proc_exit() called in child process" due to uninitialized or
stale MyProcPid.  That was reachable if close() failed in
ClosePostmasterPorts() or setlocale(category, "C") failed, both
unlikely.  Back-patch to v13 (all supported versions).

Discussion: https://postgr.es/m/20241208034614.45.nmisch@google.com
This backpatches commits beb4e9b and 1fb17b1 (originally
appearing in REL_15_STABLE) to REL_14_STABLE.  Performance
of the WAL archiver can become pretty critical at times, and reports
exist of users getting in serious trouble (hours of downtime, loss of
replicas) because of lack of this optimization.

We'd like to backpatch these to REL_13_STABLE too, but because of the
very invasive changes made by commit d75288f in the 14 timeframe,
we deem it too risky :-(

Original commit messages appear below.

Discussion: https://postgr.es/m/202411131605.m66syq5i5ucl@alvherre.pgsql

  commit beb4e9b
  Author:     Robert Haas <rhaas@postgresql.org>
  AuthorDate: Thu Nov 11 15:02:53 2021 -0500

  Improve performance of pgarch_readyXlog() with many status files.

  Presently, the archive_status directory was scanned for each file to
  archive.  When there are many status files, say because archive_command
  has been failing for a long time, these directory scans can get very
  slow.  With this change, the archiver remembers several files to archive
  during each directory scan, speeding things up.

  To ensure timeline history files are archived as quickly as possible,
  XLogArchiveNotify() forces the archiver to do a new directory scan as
  soon as the .ready file for one is created.

  Nathan Bossart, per a long discussion involving many people. It is
  not clear to me exactly who out of all those people reviewed this
  particular patch.

  Discussion: http://postgr.es/m/CA+TgmobhAbs2yabTuTRkJTq_kkC80-+jw=pfpypdOJ7+gAbQbw@mail.gmail.com
  Discussion: http://postgr.es/m/620F3CE1-0255-4D66-9D87-0EADE866985A@amazon.com

  commit 1fb17b1
  Author:     Tom Lane <tgl@sss.pgh.pa.us>
  AuthorDate: Wed Dec 29 17:02:50 2021 -0500

  Fix issues in pgarch's new directory-scanning logic.

  The arch_filenames[] array elements were one byte too small, so that
  a maximum-length filename would get corrupted if another entry
  were made after it.  (Noted by Thomas Munro, fix by Nathan Bossart.)

  Move these arrays into a palloc'd struct, so that we aren't wasting
  a few kilobytes of static data in each non-archiver process.

  Add a binaryheap_reset() call to make it plain that we start the
  directory scan with an empty heap.  I don't think there's any live
  bug of that sort, but it seems fragile, and this is very cheap
  insurance.

  Cleanup for commit beb4e9b, so no back-patch needed.

  Discussion: https://postgr.es/m/CA+hUKGLHAjHuKuwtzsW7uMJF4BVPcQRL-UMZG_HM-g0y7yLkUg@mail.gmail.com
If an owned sequence is considered interesting, force its owning
table to be marked interesting too.  This ensures, in particular,
that we'll fetch the owning table's column names so we have the
data needed for ALTER TABLE ... ADD GENERATED.  Previously there were
edge cases where pg_dump could get SIGSEGV due to not having filled in
the column names.  (The known case is where the owning table has been
made part of an extension while its identity sequence is not a member;
but there may be others.)

Also, if it's an identity sequence, force its dumped-components mask
to exactly match the owning table: dump definition only if we're
dumping the table's definition, dump data only if we're dumping the
table's data, etc.  This generalizes the code introduced in commit
b965f26 that set the sequence's dump mask to NONE if the owning
table's mask is NONE.  That's insufficient to prevent failures,
because for example the table's mask might only request dumping ACLs,
which would lead us to still emit ALTER TABLE ADD GENERATED even
though we didn't create the table.  It seems better to treat an
identity sequence as though it were an inseparable aspect of the
table, matching the treatment used in the backend's dependency logic.
Perhaps this policy needs additional refinement, but let's wait to
see some field use-cases before changing it further.

While here, add a comment in pg_dump.h warning against writing tests
like "if (dobj->dump == DUMP_COMPONENT_NONE)", which was a bug in this
case.  There is one other example in getPublicationNamespaces, which
if it's not a bug is at least remarkably unclear and under-documented.
Changing that requires a separate discussion, however.

Per report from Artur Zakirov.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKNkYnwXFBf136=u9UqUxFUVagevLQJ=zGd5BsLhCsatDvQsKQ@mail.gmail.com
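A hypothetical reproduction of the known case (extension name invented):

```sql
CREATE TABLE t (id int GENERATED ALWAYS AS IDENTITY);
-- the owning table becomes an extension member while its identity
-- sequence t_id_seq does not; pg_dump then treated the sequence as
-- interesting without ever fetching the table's column names
ALTER EXTENSION my_ext ADD TABLE t;
```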
An \if command appearing within a false (not-to-be-executed) \if
branch was incorrectly treated the same as \elif.  This could allow
statements within the inner \if to be executed when they should
not be.  Also the missing inner \if stack entry would result in an
assertion failure (in assert-enabled builds) when the final \endif
is reached.

Report and patch by Michail Nikolaev.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/CANtu0oiA1ke=SP6tauhNqkUdv5QFsJtS1p=aOOf_iU+EhyKkjQ@mail.gmail.com
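A psql script sketch of the case:

```
\if false
    \if true
        SELECT 'must not run';
    \endif
\endif
\echo reached
```

Before the fix, the inner \if was treated like \elif, so its true condition
could activate the branch and execute the SELECT, and the missing stack
entry tripped an assertion at the final \endif.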
tglsfdc and others added 19 commits January 30, 2025 15:36
smgrDoPendingSyncs had two distinct risks of integer overflow while
deciding which way to ensure durability of a newly-created relation.
First, it accumulated the total size of all forks in a variable of
type BlockNumber (uint32).  While we restrict an individual fork's
size to fit in that, I don't believe there's such a restriction on
all of them added together.  Second, it proceeded to multiply the
sum by BLCKSZ, which most certainly could overflow a uint32.

(The exact expression is total_blocks * BLCKSZ / 1024.  The
compiler might choose to optimize that to total_blocks * 8,
which is not at quite as much risk of overflow as a literal
reading would be, but it's still wrong.)

If an overflow did occur it could lead to a poor choice to
shove a very large relation into WAL instead of fsync'ing it.
This wouldn't be fatal, but it could be inefficient.

Change total_blocks to uint64 which should be plenty, and
rearrange the comparison calculation to be overflow-safe.

I noticed this while looking for ramifications of the proposed
change in MAX_KILOBYTES.  It's not entirely clear to me why
wal_skip_threshold is limited to MAX_KILOBYTES in the
first place, but in any case this code is unsafe regardless
of the range of wal_skip_threshold.

Oversight in c6b9204 which introduced wal_skip_threshold,
so back-patch to v13.

Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
Backpatch-through: 13
The top comment of StrategySyncStart() mentions BufferSync(), but this
function calls BgBufferSync(), not BufferSync().

Oversight in 9cd00c4.

Author: Ashutosh Bapat
Discussion: https://postgr.es/m/CAExHW5tgkjag8i-s=RFrCn5KAWDrC4zEPPkfUKczfccPOxBRQQ@mail.gmail.com
Backpatch-through: 13
logging_collector was only mentioning stderr and csvlog, and forgot
about jsonlog.  Oversight in dc68668, that has added support for
jsonlog in log_destination.

While on it, the description in the GUC table is tweaked to be more
consistent with the documentation and postgresql.conf.sample.

Author: Umar Hayat
Reviewed-by: Ashutosh Bapat, Tom Lane
Discussion: https://postgr.es/m/CAD68Dp1K_vBYqBEukHw=1jF7e76t8aszGZTFL2ugi=H7r=a7MA@mail.gmail.com
Backpatch-through: 13
A few of the version checks in vacuum_one_database() do not call
PQfinish() before exiting.  This precedent was unintentionally
established in commit 00d1e88, and while it's probably not too
problematic, it seems better to properly close the connection.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/Z6JAwqN1I8ljTuXp%40nathan
Backpatch-through: 13
Protect against malformed timestamps.  Also protect against negative WalSegSz
as it triggers division by zero:

((0x100000000UL) / (WalSegSz)) can turn into zero in

XLogFileName(xlogfilename, ControlFile->checkPointCopy.ThisTimeLineID,
             segno, WalSegSz);

because if WalSegSz is -1 then by arithmetic rules in C we get
0x100000000UL / 0xFFFFFFFFFFFFFFFFUL == 0.

Author: Ilyasov Ian <ianilyasov@outlook.com>
Author: Anton Voloshin <a.voloshin@postgrespro.ru>
Backpatch-through: 13
Two links in the isn module documentation were pointing to tools
which had been moved, resulting in 404 error responses.  Update
to the new URLs for the tools.  The link to the Sequoia 2000 page
in the history section was no longer working, and since the page
is no longer available online update our link to point at the
paper instead which is on a stable URL.

These links exist in all versions of the documentation so backpatch
to all supported branches.

Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: charukiewicz@protonmail.com
Discussion: https://postgr.es/m/173679670185.705.8565555804465055355@wrigleys.postgresql.org
Backpatch-through: 13
Try to make it absolutely plain that we don't retain the
originally specified time zone, only the UTC timestamp.

While at it, make glossary entries for "UTC" and "GMT".

Author: Robert Treat <rob@xzilla.net>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/173796426022.1064.9135167366862649513@wrigleys.postgresql.org
Backpatch-through: 13
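For example (sketch):

```sql
SET timezone = 'UTC';
-- the +05 offset converts the input to a UTC instant and is then forgotten
SELECT '2025-01-01 12:00:00+05'::timestamptz;  -- 2025-01-01 07:00:00+00
```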
Commit af35fe5 caused "pgbench -i" to emit a '\r' character
for each data row loaded (when stderr is a terminal).
That's effectively invisible on-screen, but it causes the
connected terminal program to consume a lot of cycles.
It's even worse if you're connected over ssh, as the data
then has to pass through the ssh tunnel.

Simplest fix is to move the added logic inside the if-tests
that check whether to print a progress line.  We could do
it another way that avoids duplicating these few lines,
but on the whole this seems the most transparent way to
write it.

Like the previous commit, back-patch to all supported versions.

Reported-by: Andres Freund <andres@anarazel.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/4k4drkh7bcmdezq6zbkhp25mnrzpswqi2o75d5uv2eeg3aq6q7@b7kqdmzzwzgb
Backpatch-through: 13
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 7ebf0f8af5adf2da3fd838965b89fbfe5850f208
There are cases where we cannot / do not want to error out for invalidly
encoded input. In such cases it can be useful to replace e.g. an incomplete
multi-byte character with bytes that will trigger an error when getting
validated as part of a larger string.

Unfortunately, until now, for some encodings no such sequence existed.  For
those encodings this commit removes one previously accepted input
combination.  We consider that to be OK, as the chosen bytes are outside of
the valid ranges for the encodings; we just previously failed to detect
that.

As we cannot add a new field to pg_wchar_table without breaking ABI, this is
implemented "in-line" in the newly added function.

Author: Noah Misch <noah@leadboat.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Backpatch-through: 13
Security: CVE-2025-1094
This commit adds fmtIdEnc() and fmtQualifiedIdEnc(), which allow the
encoding to be specified as an explicit argument.  Additionally setFmtEncoding() is
provided, which defines the encoding when no explicit encoding is provided, to
avoid breaking all code using fmtId().

All users of fmtId()/fmtQualifiedId() are either converted to the explicit
version or a call to setFmtEncoding() has been added.

This commit does not yet utilize the now well-defined encoding; that will
happen in a subsequent commit.

Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 13
Security: CVE-2025-1094
Previously invalidly encoded input to various escaping functions could lead to
the escaped string getting incorrectly parsed by psql.  To be safe, escaping
functions need to ensure that neither invalid nor incomplete multi-byte
characters can be used to "escape" from being quoted.

Functions which can report errors now return an error in more cases than
before. Functions that cannot report errors now replace invalid input bytes
with a byte sequence that cannot be used to escape the quotes and that is
guaranteed to error out when a query is sent to the server.

The following functions are fixed by this commit:
- PQescapeLiteral()
- PQescapeIdentifier()
- PQescapeString()
- PQescapeStringConn()
- fmtId()
- appendStringLiteral()

Reported-by: Stephen Fewer <stephen_fewer@rapid7.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 13
Security: CVE-2025-1094
As highlighted by the prior commit, writing correct escape functions is less
trivial than one might hope.

This test module tries to verify that different escaping functions behave
reasonably. It e.g. tests:

- Invalidly encoded input to an escape function leads to invalidly encoded
  output

- Trailing incomplete multi-byte characters are handled sensibly

- Escaped strings are parsed as single statement by psql's parser (which
  derives from the backend parser)

There are further tests that would be good to add. But even in the current
state it was rather useful for writing the fix in the prior commit.

Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 13
Security: CVE-2025-1094
On machines where char is unsigned this could lead to option parsing looping
endlessly. It's also too narrow a type on other hardware.

Found via Tom Lane's monitoring of the buildfarm.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Security: CVE-2025-1094
Backpatch-through: 13
We need to tell fmtId() what encoding to assume, but this function
doesn't know that.  Fortunately we can fix that without changing the
function's API, because we can just use SQL_ASCII.  That's because
database names in connection requests are effectively binary not text:
no encoding-aware processing will happen on them.

This fixes XversionUpgrade failures seen in the buildfarm.  The
alternative of having pg_upgrade use setFmtEncoding() is unappetizing,
given that it's connecting to multiple databases that may have
different encodings.

Andres Freund, Noah Misch, Tom Lane

Security: CVE-2025-1094
@ololobus ololobus merged commit 62a86df into REL_14_STABLE_neon Feb 13, 2025
1 check passed
@ololobus ololobus deleted the alexk/pg-14-16 branch February 13, 2025 10:51
github-merge-queue bot pushed a commit to neondatabase/neon that referenced this pull request Feb 13, 2025
## Summary of changes

Bump all minor versions. The only non-trivial conflict was between
- postgres/postgres@0350b87
- neondatabase/postgres@bd09a75

It seems that just adding this extra argument is enough.

I also got a conflict with

postgres/postgres@c1c9df3
but for some reason only in PG 15. Yet, that was a trivial one around
```c
		if (XLogCtl)
			LWLockRelease(ControlFileLock);
		/* durable_rename already emitted log message */
		return false;
```
in `xlog.c`

## Postgres PRs

- neondatabase/postgres#580
- neondatabase/postgres#579
- neondatabase/postgres#577
- neondatabase/postgres#578
