create_merge_append_plan failed to honor the CP_EXACT_TLIST flag: it would generate the expected targetlist but then it felt free to add resjunk sort targets to it. This demonstrably leads to assertion failures in v11 and HEAD, and it's probably just accidental that we don't see the same in older branches. I've not looked into whether there would be any real-world consequences in non-assert builds. In HEAD, create_append_plan has sprouted the same problem, so fix that too (although we do not have any test cases that seem able to reach that bug). This is an oversight in commit 3fc6e2d which invented the CP_EXACT_TLIST flag, so back-patch to 9.6 where that came in.

convert_subquery_pathkeys would create pathkeys for subquery output values if they match any EquivalenceClass known in the outer query and are available in the subquery's syntactic targetlist. However, the second part of that condition is wrong, because such values might not appear in the subquery relation's reltarget list, which would mean that they couldn't be accessed above the level of the subquery scan. We must check that they appear in the reltarget list, instead. This can lead to dropping knowledge about the subquery's sort ordering, but I believe it's okay, because any sort key that the outer query actually has any interest in would appear in the reltarget list.

This second issue is of very long standing, but right now there's no evidence that it causes observable problems before 9.6, so I refrained from back-patching further than that. We can revisit that choice if somebody finds a way to make it cause problems in older branches.

(Developing useful test cases for these issues is really problematic; fixing convert_subquery_pathkeys removes the only known way to exhibit the create_merge_append_plan bug, and neither of the test cases added by this patch causes a problem in all branches, even when considering the issues separately.)

The second issue explains bug #15795 from Suresh Kumar R ("could not find pathkey item to sort" with nested DISTINCT queries). I stumbled across the first issue while investigating that.

Discussion: https://postgr.es/m/15795-fadb56c8e44ee73c@postgresql.org
There's a very old race condition in our code to see whether a pre-existing shared memory segment is still in use by a conflicting postmaster: it's possible for the other postmaster to remove the segment in between our shmctl() and shmat() calls. It's a narrow window, and there's no risk unless both postmasters are using the same port number, but that's possible during parallelized "make check" tests. (Note that while the TAP tests take some pains to choose a randomized port number, pg_regress doesn't.) If it does happen, we treated that as an unexpected case and errored out.

To fix, allow EINVAL to be treated as segment-not-present, and the same for EIDRM on Linux. AFAICS, the considerations here are basically identical to the checks for acceptable shmctl() failures, so I documented and coded it that way.

While at it, adjust PGSharedMemoryAttach's API to remove its undocumented dependency on UsedShmemSegAddr in favor of passing the attach address explicitly. This makes it easier to be sure we're using a null shmaddr when probing for segment conflicts (thus avoiding questions about what EINVAL means). I don't think there was a bug there, but it required fragile assumptions about the state of UsedShmemSegAddr during PGSharedMemoryIsInUse.

Commit c098509 may have made this failure more probable by applying the conflicting-segment tests more often. Hence, back-patch to all supported branches, as that commit was.

Discussion: https://postgr.es/m/22224.1557340366@sss.pgh.pa.us
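A minimal standalone sketch of the probing pattern described here (not the actual sysv_shmem.c code; segment_is_in_use() is an invented name), treating EINVAL, and EIDRM where defined, from shmat() as segment-not-present:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

static bool
segment_is_in_use(int shmid)
{
	/* Probe with a null shmaddr, so EINVAL can only mean "no such segment". */
	void	   *addr = shmat(shmid, NULL, 0);

	if (addr == (void *) -1)
	{
		/*
		 * The segment can vanish between a prior shmctl() check and this
		 * shmat() call; that race surfaces as EINVAL (or EIDRM on Linux).
		 */
		if (errno == EINVAL
#ifdef EIDRM
			|| errno == EIDRM
#endif
			)
			return false;		/* segment is gone, hence not in use */
		perror("shmat");		/* anything else is a genuine failure */
		return false;
	}
	shmdt(addr);
	return true;
}

int
main(void)
{
	int			shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

	printf("in use: %d\n", (int) segment_is_in_use(shmid));
	shmctl(shmid, IPC_RMID, NULL);
	printf("after removal: %d\n", (int) segment_is_in_use(shmid));
	return 0;
}
```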
When failing to reindex a table or an index, reindexdb would generate an extra error message related to a database failure, which is misleading. Backpatch all the way down, as this has been introduced by 85e9a5a. Discussion: https://postgr.es/m/CAOBaU_Yo61RwNO3cW6WVYWwH7EYMPuexhKqufb2nFGOdunbcHw@mail.gmail.com Author: Julien Rouhaud Reviewed-by: Daniel Gustafsson, Álvaro Herrera, Tom Lane, Michael Paquier Backpatch-through: 9.4
The buildfarm client uses TEMP_CONFIG to implement its extra_config setting. Except for stats_temp_directory, extra_config now applies to TAP suites; extra_config values seen in the past month are compatible with this. Back-patch to 9.6, where PostgresNode was introduced, so the buildfarm can rely on it sooner. Reviewed by Andrew Dunstan and Tom Lane. Discussion: https://postgr.es/m/20181229021950.GA3302966@rfd.leadboat.com
…ion. We long ago decided to design the shared PgBackendStatus data structure to minimize the cost of writing status updates, which means that writers just have to increment the st_changecount field twice. That isn't hooked into any sort of resource management mechanism, which means that if something were to throw error between the two increments, the st_changecount field would be left odd indefinitely. That would cause readers to lock up.

Now, since it's also a bad idea to leave the field odd for longer than absolutely necessary (because readers will spin while we have it set), the expectation was that we'd treat these segments like spinlock critical sections, with only short, more or less straight-line, code in them.

That was fine as originally designed, but commit 9029f4b broke it by inserting a significant amount of non-straight-line code into pgstat_bestart(), code that is very capable of throwing errors, not to mention taking a significant amount of time during which readers will spin. We have a report from Neeraj Kumar of readers actually locking up, which I suspect was due to an encoding conversion error in X509_NAME_to_cstring, though conceivably it was just a garden-variety OOM failure.

Subsequent commits have loaded even more dubious code into pgstat_bestart's critical section (and commit fc70a4b deserves some kind of booby prize for managing to miss the critical section entirely, although the negative consequences seem minimal given that the PgBackendStatus entry should be seen by readers as inactive at that point).

The right way to fix this mess seems to be to compute all these values into a local copy of the process' PgBackendStatus struct, and then just copy the data back within the critical section proper. This plan can't be implemented completely cleanly because of the struct's heavy reliance on out-of-line strings, which we must initialize separately within the critical section. But still, the critical section is far smaller and safer than it was before.

In hopes of forestalling future errors of the same ilk, rename the macros for st_changecount management to make it more apparent that the writer-side macros create a critical section. And to prevent the worst consequences if we nonetheless manage to mess it up anyway, adjust those macros so that they really are a critical section, ie they now bump CritSectionCount. That doesn't add much overhead, and it guarantees that if we do somehow throw an error while the counter is odd, it will lead to PANIC and a database restart to reset shared memory.

Back-patch to 9.5 where the problem was introduced.

In HEAD, also fix an oversight in commit b0b39f7: it failed to teach pgstat_read_current_status to copy st_gssstatus data from shared memory to local memory. Hence, subsequent use of that data within the transaction would potentially see changing data that it shouldn't see.

Discussion: https://postgr.es/m/CAPR3Wj5Z17=+eeyrn_ZDG3NQGYgMEOY6JV6Y-WRRhGgwc16U3Q@mail.gmail.com
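For illustration, here is a standalone C sketch of the even/odd changecount protocol described above. The names (StatusEntry and friends) are invented, and real PostgreSQL uses its own memory-barrier macros rather than C11 atomics, so treat this as a model of the idea, not the pgstat source:

```c
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

typedef struct StatusEntry
{
	atomic_int	changecount;	/* odd while an update is in progress */
	char		activity[64];	/* stand-in for the payload fields */
} StatusEntry;

static void
writer_update(StatusEntry *e, const char *activity)
{
	atomic_fetch_add(&e->changecount, 1);	/* now odd: readers must wait */

	/*
	 * Everything here is effectively a critical section: an error thrown
	 * between the two increments leaves the count odd forever and readers
	 * spinning, which is exactly the failure mode the commit closes by
	 * precomputing values and keeping this window short and straight-line.
	 */
	strncpy(e->activity, activity, sizeof(e->activity) - 1);
	e->activity[sizeof(e->activity) - 1] = '\0';

	atomic_fetch_add(&e->changecount, 1);	/* even again: consistent */
}

static void
reader_copy(StatusEntry *e, StatusEntry *snap)
{
	for (;;)
	{
		int			before = atomic_load(&e->changecount);

		if (before % 2 == 0)
		{
			memcpy(snap->activity, e->activity, sizeof(snap->activity));
			if (atomic_load(&e->changecount) == before)
				return;			/* saw a consistent snapshot */
		}
		/* writer active, or it raced with us: retry */
	}
}

int
main(void)
{
	static StatusEntry shared, snap;

	writer_update(&shared, "idle");
	reader_copy(&shared, &snap);
	printf("reader saw: %s\n", snap.activity);
	return 0;
}
```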
The function had been interpreting SQL_ASCII messages as UTF8, throwing an error when they were invalid UTF8. The new behavior is consistent with pg_do_encoding_conversion(). This affects LOG_DESTINATION_STDERR and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to write() and ReportEventA(). On buildfarm member bowerbird, enabling log_connections caused an error whenever the role name was not valid UTF8. Back-patch to 9.4 (all supported versions). Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com
A bounded quantifier with m = n = 1 might be thought a no-op. But
according to our documentation (which traces back to Henry Spencer's
original man page) it still imposes greediness, or non-greediness in the
case of the non-greedy variant "{1,1}?", on whatever it's attached to.
This turns out not to work though, because parseqatom() optimizes away
the m = n = 1 case without regard for whether it's supposed to change
the greediness of the argument RE.
We can fix this by just not applying the optimization when the greediness
needs to change; the subsequent general cases handle it fine.
The three cases in which we can still apply the optimization, as illustrated in the sketch below, are
(a) no quantifier, or quantifier does not impose a preference;
(b) atom has no greediness property, implying it cannot match a
variable amount of text anyway; or
(c) quantifier's greediness is same as atom's.
Note that in most cases where one of these applies, we'd have exited
earlier in the "not a messy case" fast path. I think it's now only
possible to get to the optimization when the atom involves capturing
parentheses or a non-top-level backref.
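As an illustrative sketch (plain C, not the actual parseqatom() code; the PREF_* names are hypothetical stand-ins for the regex engine's greediness markers), the guarded decision amounts to:

```c
#include <stdio.h>

typedef enum GreedPref
{
	PREF_NONE,					/* no preference imposed */
	PREF_SHORTEST,				/* non-greedy */
	PREF_LONGEST				/* greedy */
} GreedPref;

/* May we elide an m = n = 1 quantifier without changing semantics? */
static int
can_elide_quantifier(GreedPref qprefer, GreedPref atomprefer)
{
	if (qprefer == PREF_NONE)
		return 1;				/* (a) quantifier imposes no preference */
	if (atomprefer == PREF_NONE)
		return 1;				/* (b) atom has no greediness property */
	return qprefer == atomprefer;	/* (c) greediness already matches */
}

int
main(void)
{
	/* "(x){1,1}?" applied to a greedy atom: must NOT be optimized away */
	printf("%d\n", can_elide_quantifier(PREF_SHORTEST, PREF_LONGEST));
	return 0;
}
```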
Back-patch to all supported branches. I'd ordinarily be hesitant to
put a subtle behavioral change into back branches, but in this case
it's very hard to see a reason why somebody would write "{1,1}?" unless
they're trying to get the documented change-of-greediness behavior.
Discussion: https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net
pgtls_read_pending is declared to return bool, but what the underlying SSL_pending function returns is a count of available bytes. This is actually somewhat harmless if we're using C99 bools, but in the back branches it's a live bug: if the available-bytes count happened to be a multiple of 256, it would get converted to a zero char value. On machines where char is signed, counts of 128 and up could misbehave as well. The net effect is that when using SSL, libpq might block waiting for data even though some has already been received. Broken by careless refactoring in commit 4e86f1b, so back-patch to 9.5 where that came in. Per bug #15802 from David Binderman. Discussion: https://postgr.es/m/15802-f0911a97f0346526@postgresql.org
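The truncation hazard is easy to reproduce standalone, assuming the pre-C99 situation where bool is a typedef for char (the typedef name below is invented for the demonstration):

```c
#include <stdio.h>

typedef char pre_c99_bool;		/* what the back branches effectively use */

int
main(void)
{
	int			pending = 256;	/* e.g. a byte count from SSL_pending() */

	pre_c99_bool have_data = pending;	/* in practice truncates to 0 */
	_Bool		c99_have_data = pending;	/* converts to 1, as intended */

	printf("pre-C99: %d, C99: %d\n", (int) have_data, (int) c99_have_data);
	return 0;
}
```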
Commit 8fa30f9 reduced the elevel of a number of "can't happen" _bt_split() errors from PANIC to ERROR. At the same time, the new right page buffer for the split could continue to be acquired well before the critical section. This was possible because it was relatively straightforward to make sure that _bt_split() could not throw an error, with a few specific exceptions. The exceptional cases were safe because they involved specific, well understood errors, making it possible to consistently zero the right page before actually raising an error using elog(). There was no danger of leaving around a junk page, provided _bt_split() stuck to this coding rule.

Commit 8224de4, which introduced INCLUDE indexes, added code to make _bt_split() truncate away non-key attributes. This happened at a point that broke the rule around zeroing the right page in _bt_split(). If truncation failed (perhaps due to palloc() failure), that would result in an errant right page buffer with junk contents. This could confuse VACUUM when it attempted to delete the page, and should be avoided on general principle.

To fix, reorganize _bt_split() so that truncation occurs before the new right page buffer is even acquired. A junk page/buffer will not be left behind if _bt_nonkey_truncate()/_bt_truncate() raise an error.

Discussion: https://postgr.es/m/CAH2-WzkcWT_-NH7EeL=Az4efg0KCV+wArygW8zKB=+HoP=VWMw@mail.gmail.com
Backpatch: 11-, where INCLUDE indexes were introduced.
Only hand-assigned type OIDs should be presumed to match across different PG servers; those assigned during genbki.pl or during initdb are likely to change due to addition or removal of unrelated objects. This means that the cutoff should be FirstGenbkiObjectId (in HEAD) or FirstBootstrapObjectId (before that), not FirstNormalObjectId. Compare postgres_fdw's is_builtin() test. It's likely that this error has no observable consequence in a normally-functioning system, since ATM the only affected type OIDs are system catalog rowtypes and information_schema types, which would not typically be interesting for logical replication. But you could probably break it if you tried hard, so back-patch. Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us
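A sketch of the corrected cutoff, modeled on postgres_fdw's is_builtin(); the constant values below match access/transam.h at the time of writing but should be treated as assumptions here:

```c
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int Oid;

#define FirstGenbkiObjectId		10000	/* end of hand-assigned OIDs (HEAD) */
#define FirstNormalObjectId		16384	/* the too-loose cutoff the bug used */

static bool
type_oid_is_portable(Oid typid)
{
	/*
	 * Only hand-assigned OIDs can be presumed to match across servers;
	 * OIDs handed out by genbki.pl or initdb are not stable.
	 */
	return typid < FirstGenbkiObjectId;
}

int
main(void)
{
	printf("text (hand-assigned OID 25): %d\n",
		   (int) type_oid_is_portable(25));
	printf("a genbki-assigned rowtype (e.g. 11000): %d\n",
		   (int) type_oid_is_portable(11000));
	return 0;
}
```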
An upcoming HEAD-only patch will standardize the terminology around ItemIdData variables/line pointers, ending the practice of referring to them as "item pointers". Make the "Database Page Layout" docs consistent with the new policy. The term "item identifier" is already used in the same section, so stick with that. Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com Backpatch: All supported branches.
The conditions listed in this comment have changed several times, and at some point the thing that the "if so" referred to was negated. The text was OK up to 9.6. It was differently wrong in v10, v11 and master, so fix in all those versions.
This path previously was not reliably covered. There was some heuristic coverage via insert-conflict-toast.spec, but that test is not deterministic, and only tested for a somewhat specific bug.

Backpatch, as this is a complicated and otherwise untested code path. Unfortunately 9.5 cannot handle two waiting sessions, and thus cannot execute this test.

Triggered by a conversation with Melanie Plageman.

Author: Andres Freund
Discussion: https://postgr.es/m/CAAKRu_a7hbyrk=wveHYhr4LbcRnRCG=yPUVoQYB9YO1CdUBE9Q@mail.gmail.com
Backpatch: 9.5-
Cross-type comparison operators in a btree or hash opclass might be only stable not immutable (this is true of timestamp vs. timestamptz for example). partprune.c ignored this possibility and would perform plan-time pruning with them anyway, possibly leading to wrong answers if the environment changed between planning and execution.

To fix, teach gen_partprune_steps() to do things differently when creating plan-time pruning steps vs. run-time pruning steps. analyze_partkey_exprs() also needs an extra check, which is rather annoying but now is not the time to restructure things enough to avoid that.

While at it, simplify the logic for the plan-time case a little by insisting that the comparison value be a Const and nothing else. This relies on the assumption that eval_const_expressions will have reduced any immutable expression to a Const; which is not quite 100% true, but certainly any case that comes up often enough to be interesting should have simplification logic there.

Also improve a bunch of inadequate/obsolete/wrong comments.

Per discussion of a report from Alan Jackson (though this fixes only one aspect of that problem). Back-patch to v11 where this code came in.

David Rowley, with some further hacking by me

Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
gen_prune_steps_from_opexps's notion of how to do this was overly complicated and underly correct. Per discussion of a report from Alan Jackson (though this fixes only one aspect of that problem). Back-patch to v11 where this code came in. Amit Langote Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
Previously, gen_partprune_steps() always built executor pruning steps using all suitable clauses, including those containing PARAM_EXEC Params. This meant that the pruning steps were only completely safe for executor run-time (scan start) pruning. To prune at executor startup, we had to ignore the steps involving exec Params. But this doesn't really work in general, since there may be logic changes needed as well --- for example, pruning according to the last operator's btree strategy is the wrong thing if we're not applying that operator. The rules embodied in gen_partprune_steps() and its minions are sufficiently complicated that tracking their incremental effects in other logic seems quite impractical.

Short of a complete redesign, the only safe fix seems to be to run gen_partprune_steps() twice, once to create executor startup pruning steps and then again for run-time pruning steps. We can save a few cycles however by noting during the first scan whether we rejected any clauses because they involved exec Params --- if not, we don't need to do the second scan.

In support of this, refactor the internal APIs in partprune.c to make more use of passing information in the GeneratePruningStepsContext struct, rather than as separate arguments.

This is, I hope, the last piece of our response to a bug report from Alan Jackson. Back-patch to v11 where this code came in.

Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
When this suite runs installcheck, redirect file creations from src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a race condition in "make -j check-world". If the pg_upgrade suite wrote to a given src/test/regress/results file in parallel with the regular src/test/regress invocation writing it, a test failed spuriously. Even without parallelism, in "make -k check-world", the suite finishing second overwrote the other's regression.diffs. This revealed test "largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too. Buildfarm client REL_10, released forty-five days ago, supports saving regression.diffs from its new location. When an older client reports a pg_upgradeCheck failure, it will no longer include regression.diffs. Back-patch to 9.5, where pg_upgrade moved to src/bin. Reviewed by Andrew Dunstan. Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
This reverts commit bd1592e. It had multiple defects. Discussion: https://postgr.es/m/12717.1558304356@sss.pgh.pa.us
Discussion: https://postgr.es/m/92961161-9b49-e42f-0a72-d5d47e0ed4de@postgrespro.ru Author: Liudmila Mantrova Reviewed-by: Jonathan Katz, Tom Lane, Michael Paquier Backpatch-through: 9.4
…argument. For partial aggregation combine steps, AggStatePerTrans->numTransInputs was set to the transition function's number of inputs, rather than the combine function's number of inputs (always 1). That led partial aggregates with strict combine functions to wrongly check for NOT NULL input as required by strictness. When the aggregate wasn't passed exactly one argument, the strictness check was either omitted (in the 0 args case) or too many arguments were checked. In the latter case we'd read beyond the end of FunctionCallInfoData->args (only in master).

AggStatePerTrans->numTransInputs actually has been wrong since 9.6, where partial aggregates were added. But it turns out not to be an active problem in 9.6 and 10, because numTransInputs wasn't used at all for combine functions: before c253b72 there simply was no NULL check for the input to strict trans functions, and after that the check was simply hardcoded for the right offset in fcinfo, as it's done by code specific to combine functions.

In bf6c614 (11) the strictness check was generalized, with common code doing the strictness checks for both plain and combine transition functions, based on numTransInputs. For combine functions this led to not emitting an expression step to check for strict input in the 0 arguments case, and in the > 1 arguments case, we'd check too many arguments. Due to the fact that the relevant fcinfo->isnull[2..] was always zero-initialized (more or less by accident, by being part of the AggStatePerTrans struct, which is palloc0'ed), there was no observable damage in the latter case before a9c35cf; we just checked too many array elements. Due to the changes in a9c35cf, the > 1 argument bug became visible, because these days fcinfo is a) dynamically allocated without being zeroed and b) exactly the length required for the number of specified arguments (hardcoded to 2 in this case).

This commit only contains a fairly minimal fix, setting numTransInputs to a hardcoded 1 when building a pertrans for a combine function. It seems likely that we'll want to clean this up further (e.g. the arguments of build_pertrans_for_aggref() aren't particularly meaningful for combine functions). But the wrap date for 12 beta1 is coming up fast, so it seems good to have a minimal fix in place.

Backpatch to 11. While AggStatePerTrans->numTransInputs was set wrongly before that, the value was not used for combine functions.

Reported-By: Rajkumar Raghuwanshi
Diagnosed-By: Kyotaro Horiguchi, Jeevan Chalke, Andres Freund, David Rowley
Author: David Rowley, Kyotaro Horiguchi, Andres Freund
Discussion: https://postgr.es/m/CAKcux6=uZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew@mail.gmail.com
This uses a method similar to 68a7c24, which guarantees that GRANT commands using WITH GRANT OPTION are dumped in a way that respects cascading dependencies. As databases do not have support for initial privileges via pg_init_privs, we need to repeat the same ACL reordering method.

ACLs for databases have been moved from pg_dumpall to pg_dump in v11, so this impacts pg_dump for v11 and above, and pg_dumpall for v9.6 and v10.

Discussion: https://postgr.es/m/15788-4e18847520ebcc75@postgresql.org
Author: Nathan Bossart
Reviewed-by: Haribabu Kommi
Backpatch-through: 9.6
This uses a method similar to 68a7c24 and now b8c6014 (applied for database creation), which guarantees that GRANT commands using WITH GRANT OPTION are dumped in a way that respects cascading dependencies. Note that tablespaces do not have support for initial privileges via pg_init_privs, so the same method needs to be applied again. It would be nice to merge all the logic generating ACL queries in dumps under the same banner, but this requires extending the support of pg_init_privs to objects that cannot use it yet, so this is left as future work.

Discussion: https://postgr.es/m/20190522071555.GB1278@paquier.xyz
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Backpatch-through: 9.6
When there were duplicate columns in the hash key list, the array sizes could be miscomputed, resulting in access off the end of the array. Adjust the computation to ensure the array is always large enough. (I considered whether the duplicates could be removed in planning, but I can't rule out the possibility that duplicate columns might have different hash functions assigned. Simpler to just make sure it works at execution time regardless.) Bug apparently introduced in fc4b3de as part of narrowing down the tuples stored in the hashtable. Reported by Colm McHugh of Salesforce, though I didn't use their patch. Backpatch back to version 10 where the bug was introduced. Discussion: https://postgr.es/m/CAFeeJoKKu0u+A_A9R9316djW-YW3-+Gtgvy3ju655qRHR3jtdA@mail.gmail.com
…bindir. On master (after 700538) the old version's installed psql was used - even when the old version might not actually be installed / might be installed into a temporary directory. That is commonly the case when just executing "make check" for pg_upgrade, as $oldbindir is just the current version's $bindir.

In the back branches, with --install specified, psql from the new version's temporary installation was used; without --install (e.g. for NO_TEMP_INSTALL, cf 47b3c26), the new version's installed psql was used (which might or might not exist).

Author: Andres Freund
Discussion: https://postgr.es/m/20190522175150.c26f4jkqytahajdg@alap3.anarazel.de
Per bug #15819 from Koizumi Satoru. Discussion: https://postgr.es/m/15819-e6191bef1f7334c0@postgresql.org
This allows "vcregress upgradecheck" to pass twice in immediate succession, and it's more like how $(prove_check) works. Back-patch to 9.5, where pg_upgrade moved to src/bin. Discussion: https://postgr.es/m/20190520012436.GA1480421@rfd.leadboat.com
When this suite runs installcheck, redirect file creations from src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a race condition in "make -j check-world". If the pg_upgrade suite wrote to a given src/test/regress/results file in parallel with the regular src/test/regress invocation writing it, a test failed spuriously. Even without parallelism, in "make -k check-world", the suite finishing second overwrote the other's regression.diffs. This revealed test "largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too. Buildfarm client REL_10, released fifty-four days ago, supports saving regression.diffs from its new location. When an older client reports a pg_upgradeCheck failure, it will no longer include regression.diffs. Back-patch to 9.5, where pg_upgrade moved to src/bin. Reviewed (in earlier versions) by Andrew Dunstan. Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
Most errors reported in extended statistics are internal issues, and so should use elog(). The MCV list code was already following this rule, but the functional dependencies and ndistinct coefficients were using a mix of elog() and ereport(). Fix this by changing most places to elog(), with the exception of input functions.

This is a mostly cosmetic change; it makes life a little bit easier for translators, as elog() messages are not translated. Backpatch to PostgreSQL 10, where extended statistics were introduced.

Author: Tomas Vondra
Backpatch-through: 10 where extended statistics were added
Discussion: https://postgr.es/m/20190503154404.GA7478@alvherre.pgsql
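The convention being enforced, sketched as server-side C compiled against the PostgreSQL headers (the function bodies are illustrative, loosely modeled on the statistics input functions rather than copied from them):

```c
#include "postgres.h"
#include "fmgr.h"

/* Internal invariant violation: use elog(), which is not translated. */
void
check_ndistinct_items(int nitems)
{
	if (nitems < 2)
		elog(ERROR, "ndistinct item array has fewer than 2 items");
}

/* User-facing input error: use ereport(), with errcode and translation. */
Datum
pg_ndistinct_in(PG_FUNCTION_ARGS)
{
	ereport(ERROR,
			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
			 errmsg("cannot accept a value of type %s", "pg_ndistinct")));
	PG_RETURN_VOID();			/* keep compiler quiet */
}
```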
Rename some exposed parameters so that they don't conflict with C++ reserved words. Back-patch to all supported versions. George Tarasov Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru
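A hypothetical illustration of the problem class (the function name is made up; the point is only the parameter naming):

```c
#ifdef __cplusplus
extern "C"
{
#endif

/* Before: valid C, but a syntax error when the header is used from C++ */
/* extern void ecpg_set_var(int number, void *new, int lineno); */

/* After: the C++ reserved word is renamed, so both languages accept it */
extern void ecpg_set_var(int number, void *pointer, int lineno);

#ifdef __cplusplus
}
#endif
```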
Per gripe from Ian Barwick.

Co-authored-by: Ian Barwick <ian@2ndquadrant.com>
Discussion: https://postgr.es/m/CABvVfJWNnNKb8cHsTLhkTsvL1+G6BVcV+57+w1JZ61p8YGPdWQ@mail.gmail.com
The table has not been updated for some commands introduced in recent releases, so refresh it. While at it, reorder entries alphabetically. Backpatch all the way down for all the commands which have gone missing.

Reported-by: Jeremy Smith
Discussion: https://postgr.es/m/15883-afff0ea3cc2dbbb6@postgresql.org
Backpatch-through: 9.4
Early previews of LLVM 9 reveal that our Min() macro causes compiler errors in LLVM headers reached by the #include directives in llvmjit_inline.cpp. Let's just undefine it. Per buildfarm animal seawasp. Back-patch to 11. Reviewed-by: Fabien Coelho, Tom Lane Discussion: https://postgr.es/m/20190606173216.GA6306%40alvherre.pgsql
A "break" statement erroneously left behind by commit a1c1af2 caused TopoSort to do the wrong thing if a lock's wait list contained multiple members of the same locking group. Because parallel workers don't normally need any locks not already taken by their leader, this is very hard --- maybe impossible --- to hit in production. Still, if it did happen, the queries involved in an otherwise-resolvable deadlock would block until canceled. In addition to removing the bogus "break", add an Assert showing that the conflicting uses of the beforeConstraints[] array (for both counts and flags) don't overlap, and add some commentary explaining why not; because it's not obvious without explanation, IMHO. Original report and patch from Rui Hai Jiang; additional assert and commentary by me. Back-patch to 9.6 where the bug came in. Discussion: https://postgr.es/m/CAEri+mLd3bpHLyW+a9pSe1y=aEkeuJpwBSwvo-+m4n7-ceRmXw@mail.gmail.com
When performing ANALYZE on inheritance trees, we collect two samples for each relation - one for the relation alone, and one for the inheritance subtree (the relation and its child relations). And then we build statistics on each sample, so for each relation we get two sets of statistics.

For regular (per-column) statistics this works fine, because the catalog includes a flag differentiating statistics built from those two samples. But we don't have such a flag in the extended statistics catalogs, and we ended up updating the same row twice, triggering this error:

    ERROR:  tuple already updated by self

The simplest solution is to disable extended statistics on inheritance trees, which is what this commit is doing. In the future we may need to do something similar to what we do for per-column statistics, but that requires adding a flag to the catalog - and that's not backpatchable. Moreover, the current selectivity estimation code only works with individual relations, so building statistics on inheritance trees would be pointless anyway.

Author: Tomas Vondra
Backpatch-to: 10-
Discussion: https://postgr.es/m/20190618231233.GA27470@telsasoft.com
Reported-by: Justin Pryzby
This has been wrong ever since pg_rewind was added. The if-branch just above this, where we print the same error with an extra message supplied by XLogReadRecord(), got this right, but the variable name was wrong in the else-branch. As a consequence, the error printed the WAL position as 0/0 if there was an error reading a WAL file.

Backpatch to 9.5, where pg_rewind was added.
Since pg_dump doesn't treat the member operators and functions of operator classes/families (that is, the pg_amop and pg_amproc entries, not the underlying operators/functions) as separate dumpable objects, it missed their dependency information. I think this was safe when the code was designed, because the default object sorting rule emits operators and functions before opclasses, and there were no dependency types that could mess that up. However, the introduction of range types in 9.2 broke it: now a type can have a dependency on an opclass, allowing dependency rules to push the opclass before the type and hence before custom operators. Lacking any information showing that it shouldn't do so, pg_dump emitted the objects in the wrong order.

Fix by teaching getDependencies() to translate pg_depend entries for pg_amop/amproc rows to look like dependencies for their parent opfamily.

I added a regression test for this in HEAD/v12, but not further back; life is too short to fight with 002_pg_dump.pl.

Per bug #15934 from Tom Gottfried. Back-patch to all supported branches.

Discussion: https://postgr.es/m/15934-58b8c8ab7a09ea15@postgresql.org
When multiple modules using check_password_hook_type were loaded, loading passwordcheck would remove any trace of a previously-loaded hook. Unloading the module would also cause previous hooks to be entirely gone.

Reported-by: Rafael Castro
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/15932-78f48f9ef166778c@postgresql.org
Backpatch-through: 9.4
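The fix follows the usual PostgreSQL hook-chaining pattern, sketched below (module boilerplate only; the hook signature shown matches PostgreSQL 10 and later, which is an assumption to check against each back branch):

```c
#include "postgres.h"
#include "commands/user.h"
#include "fmgr.h"
#include "libpq/crypt.h"

PG_MODULE_MAGIC;

/* Remember whatever hook was installed before this module loaded. */
static check_password_hook_type prev_check_password_hook = NULL;

static void
my_check_password(const char *username, const char *shadow_pass,
				  PasswordType password_type, Datum validuntil_time,
				  bool validuntil_null)
{
	/* Chain to the previous hook instead of silently dropping it. */
	if (prev_check_password_hook)
		prev_check_password_hook(username, shadow_pass, password_type,
								 validuntil_time, validuntil_null);

	/* ... this module's own checks would go here ... */
}

void
_PG_init(void)
{
	prev_check_password_hook = check_password_hook;
	check_password_hook = my_check_password;
}

void
_PG_fini(void)
{
	/* Restore the previous hook, not NULL, so earlier modules survive. */
	check_password_hook = prev_check_password_hook;
}
```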
As usual, the release notes for other branches will be made by cutting these down, but put them up for community review first.
When querying a partitioned table containing a default partition, we were wrongly deciding to include it in the scan too early in the process, failing to exclude it in some cases. If we reinterpret the PruneStepResult.scan_default flag slightly, we can do a better job at detecting that it can be excluded. The change is that we avoid setting the flag for that pruning step unless the step absolutely requires the default partition to be scanned (in contrast with the previous arrangement, which was to set it unless the step was able to prune it).

So get_matching_partitions() must explicitly check the partition that each returned bound value corresponds to in order to determine whether the default one needs to be included, rather than relying on the flag from the final step result.

Author: Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp>
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Discussion: https://postgr.es/m/00e601d4ca86$932b8bc0$b982a340$@lab.ntt.co.jp
src/test/kerberos and src/test/ldap need to run a private authentication server of the relevant type, for which they need a free TCP port. They were just picking a random port number in 48K-64K, which works except when something's already using the particular port. Notably, the probability of failure rises dramatically if one simply runs those tests in a tight loop, because each test cycle leaves behind a bunch of high ports that are transiently in TIME_WAIT state.

To fix, split out the code that PostgresNode.pm already had for identifying a free TCP port number, so that it can be invoked to choose a port for the KDC or LDAP server. This isn't 100% bulletproof, since conceivably something else on the machine could grab the port between the time we check and the time we actually start the server. But that's a pretty short window, so in practice this should be good enough.

Back-patch to v11 where these test suites were added.

Patch by me, reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/3397.1564872168@sss.pgh.pa.us
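The underlying probe is simple to sketch in standalone C (the Perl test framework does this with its own socket code; port_is_free() here is illustrative, and the same check-then-use race described above applies):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int
port_is_free(unsigned short port)
{
	struct sockaddr_in addr;
	int			sock = socket(AF_INET, SOCK_STREAM, 0);
	int			ok;

	if (sock < 0)
		return 0;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(port);

	/* If we can bind it, nothing else is currently listening on it. */
	ok = (bind(sock, (struct sockaddr *) &addr, sizeof(addr)) == 0);
	close(sock);
	return ok;
}

int
main(void)
{
	unsigned int port;

	/* scan the same sort of high range the tests draw from */
	for (port = 49152; port <= 65535; port++)
	{
		if (port_is_free((unsigned short) port))
		{
			printf("free port: %u\n", port);
			return 0;
		}
	}
	return 1;
}
```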
Perl has multiple internal representations of "undef", and just testing for SvTYPE(x) == SVt_NULL doesn't recognize all of them, leading to "cannot transform this Perl type to jsonb" errors. Use the approved test SvOK() instead. Report and patch by Ivan Panchenko. Back-patch to v11 where this module was added. Discussion: https://postgr.es/m/1564783533.324795401@f193.i.mail.ru
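For illustration, a minimal embedded-Perl C program (build flags come from "perl -MExtUtils::Embed -e ccopts -e ldopts"; SV internals vary across Perl versions, so treat the SVt_PV outcome as typical rather than guaranteed):

```c
#include <EXTERN.h>
#include <perl.h>
#include <stdio.h>

static PerlInterpreter *my_perl;

int
main(int argc, char **argv, char **env)
{
	char	   *embedding[] = {"", "-e", "0"};
	SV		   *sv;

	PERL_SYS_INIT3(&argc, &argv, &env);
	my_perl = perl_alloc();
	perl_construct(my_perl);
	perl_parse(my_perl, NULL, 3, embedding, NULL);

	sv = newSV(0);				/* a fresh undef: SvTYPE is SVt_NULL */
	sv_setpv(sv, "abc");		/* now SVt_PV */
	sv_setsv(sv, &PL_sv_undef); /* undef again, but type can stay SVt_PV */

	printf("SVt_NULL check: %d   SvOK check: %d\n",
		   (int) (SvTYPE(sv) == SVt_NULL),	/* can miss this undef */
		   (int) !SvOK(sv));				/* recognizes every undef */

	SvREFCNT_dec(sv);
	perl_destruct(my_perl);
	perl_free(my_perl);
	PERL_SYS_TERM();
	return 0;
}
```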
OWNER_TO was used for the completion, which is not a supported grammar, but OWNER TO is. This error has been introduced by d37b816, so backpatch down to 9.6. Author: Alexander Lakhin Discussion: https://postgr.es/m/7ab243e0-116d-3e44-d120-76b3df7abefd@gmail.com Backpatch-through: 9.6
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 6e5b36ec437a93cda602c581c48641e77a240f74
Commit aa27977 introduced this restriction for pg_temp.function_name(arg); do likewise for types created in temporary schemas. Programs that this breaks should add "pg_temp." schema qualification or switch to arg::type_name syntax. Back-patch to 9.4 (all supported versions). Reviewed by Tom Lane. Reported by Tom Lane. Security: CVE-2019-10208
Commit bf6c614 rearranged the lookup of the comparison operators needed in a hashed subplan, and in so doing, broke the cross-type case: it caused the original LHS-vs-RHS operator to be used to compare hash table entries too (which of course are all of the RHS type). This leads to C functions being passed a Datum that is not of the type they expect, with the usual hazards of crashes and unauthorized server memory disclosure. For the set of hashable cross-type operators present in v11 core Postgres, this bug is nearly harmless on 64-bit machines, which may explain why it escaped earlier detection. But it is a live security hazard on 32-bit machines; and of course there may be extensions that add more hashable cross-type operators, which would increase the risk. Reported by Andreas Seltenreich. Back-patch to v11 where the problem came in. Security: CVE-2019-10209
src/test/kerberos and src/test/ldap try to run private authentication servers, which of course might fail. The logs from these servers were being dropped into the tmp_check/ subdirectory, but they should be put in tmp_check/log/, because the buildfarm will only capture log files in that subdirectory. Without the log output there's little hope of diagnosing buildfarm failures related to these servers. Backpatch to v11 where these test suites were added. Discussion: https://postgr.es/m/16017.1565047605@sss.pgh.pa.us
contrib/intarray considers "arraycol <@ constant-array" to be indexable, but its GiST opclass code fails to reliably find index entries for empty array values (which of course should trivially match such queries). This is because the test condition to see whether we should descend through a non-leaf node is wrong.

Unfortunately, empty array entries could be anywhere in the index, as these index opclasses are currently designed. So there's no way to fix this except by lobotomizing <@ indexscans to scan the whole index ... which is what this patch does. That's pretty unfortunate: the performance is now actually worse than a seqscan, in most cases. We'd be better off to remove <@ from the GiST opclasses entirely, and perhaps a future non-back-patchable patch will do so.

In the meantime, applications whose performance is adversely impacted have a couple of options. They could switch to a GIN index, which doesn't have this bug, or they could replace "arraycol <@ constant-array" with "arraycol <@ constant-array AND arraycol && constant-array". That will provide about the same performance as before, and it will find all non-empty subsets of the given constant-array, which is all that could reliably be expected of the query before.

While at it, add some more regression test cases to improve code coverage of contrib/intarray.

In passing, adjust resize_intArrayType so that when it's returning an empty array, it uses construct_empty_array for that rather than cowboy hacking on the input array. While the hack produces an array that looks valid for most purposes, it isn't bitwise equal to empty arrays produced by other code paths, which could have subtle odd effects. I don't think this code path is performance-critical enough to justify such shortcuts. (Back-patch this part only as far as v11; before commit 01783ac we were not careful about this in other intarray code paths either.)

Back-patch the <@ fixes to all supported versions, since this was broken from day one.

Patch by me; thanks to Alexander Korotkov for review.

Discussion: https://postgr.es/m/458.1565114141@sss.pgh.pa.us
When parsing a timetz string with a dynamic timezone abbreviation or with no timezone specified, it was possible to generate incorrect timestamps from uninitialized date variables if the input string did not fully specify a date. This was already checked when a full timezone spec was included in the input string, but the two other cases mentioned above missed the same checks. Fix by raising an error for such input, in short whenever a date is not fully specified.

Valgrind was complaining about this problem.

Bug: #15910
Author: Alexander Lakhin
Discussion: https://postgr.es/m/15910-2eba5106b9aa0c61@postgresql.org
Backpatch-through: 9.4
In serializable mode, heap_hot_search_buffer() incorrectly acquired a predicate lock on the root tuple, not the returned tuple that satisfied the visibility checks. As explained in README-SSI, the predicate lock does not need to be copied or extended to other tuple versions, but for that to work, the correct, visible, tuple version must be locked in the first place.

The original SSI commit had this bug in it, but it was fixed back in 2013, in commit 81fbbfe. But unfortunately, it was reintroduced a few months later in commit b89e151. Wising up from that, add a regression test to cover this, so that it doesn't get reintroduced again. Also, move the code that sets 't_self', so that it happens at the same time that the other HeapTuple fields are set, to make it more clear that all the code in the loop operates on the "current" tuple in the chain, not the root tuple.

Bug spotted by Andres Freund, analysis and original fix by Thomas Munro, test case and some additional changes to the fix by Heikki Linnakangas.

Backpatch to all supported versions (9.4).

Discussion: https://www.postgresql.org/message-id/20190731210630.nqhszuktygwftjty%40alap3.anarazel.de
VACUUM's reference page had this text, but ANALYZE's didn't. That's a clear oversight given that section 5.7 explicitly delegates the responsibility to define permissions requirements to the individual commands' man pages. Per gripe from Isaac Morland. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAMsGm5fp3oBUs-2iRfii0iEO=fZuJALVyM2zJLhNTjG34gpAVQ@mail.gmail.com
openssl doesn't like lower case subject attribute names. Error observed in buildfarm results. Backpatch to release 11.
Reviewed by Tom Lane and Amit Langote Discussion: https://postgr.es/m/20190806222735.GA9535@alvherre.pgsql