When updating a relation with a rule whose action performed an INSERT
from a multi-row VALUES list, the rewriter might skip processing the
VALUES list, and therefore fail to replace any DEFAULTs in it. This
would lead to an "unrecognized node type" error.
The reason was that RewriteQuery() assumed that a query doing an
INSERT from a multi-row VALUES list would necessarily only have one
item in its fromlist, pointing to the VALUES RTE to read from. That
assumption is correct for the original query, but not for product
queries produced for rule actions. In such cases, there may be
multiple items in the fromlist, possibly including multiple VALUES
RTEs.
What is required instead is for RewriteQuery() to skip any RTEs from
the product query's originating query, which might include one or more
already-processed VALUES RTEs. What's left should then include at most
one VALUES RTE (from the rule action) to be processed.
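As a standalone toy model of the intended scan (this is not the
rewriteHandler.c code, and identifying originating-query RTEs by their
range-table position is an assumption of the example):

#include <stdio.h>

typedef enum { RTE_RELATION, RTE_VALUES, RTE_OTHER } RTEKindSim;

/*
 * Return the 1-based range-table index of the VALUES RTE that still needs
 * processing, or 0 if there is none.  Entries at positions <= orig_rt_length
 * came from the originating query and were already handled.
 */
static int
find_values_rte(const RTEKindSim *rtable, const int *fromlist, int fromlen,
                int orig_rt_length)
{
    int found = 0;

    for (int i = 0; i < fromlen; i++)
    {
        int rtindex = fromlist[i];

        if (rtindex <= orig_rt_length)
            continue;        /* skip RTEs from the originating query */
        if (rtable[rtindex - 1] == RTE_VALUES)
        {
            if (found != 0)
            {
                fprintf(stderr, "unexpected: more than one new VALUES RTE\n");
                return 0;
            }
            found = rtindex;
        }
    }
    return found;
}

int
main(void)
{
    /*
     * Product query for a rule action: the originating INSERT's rtable
     * (target relation plus its already-processed VALUES RTE) followed by
     * the VALUES RTE added by the rule action's own INSERT.
     */
    RTEKindSim rtable[] = {RTE_RELATION, RTE_VALUES, RTE_VALUES};
    int        fromlist[] = {2, 3};

    printf("VALUES RTE still to process: %d\n",
           find_values_rte(rtable, fromlist, 2, 2));
    return 0;
}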
Patch by me. Thanks to Tom Lane for reviewing.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAEZATCV39OOW7LAR_Xq4i%2BLc1Byux%3DeK3Q%3DHD_pF1o9LBt%3DphA%40mail.gmail.com
When the relkind of a relcache entry changes, because a table is converted into
a view, pgstats can get confused in 15+, leading to crashes or assertion
failures.
For HEAD, Tom fixed this in b23cd185fd5, by removing support for converting a
table to a view, thereby removing the source of the inconsistency. This commit
just adds an assertion that a relcache entry's relkind does not change, in case
another way of changing it appears in the future. As there are no remaining
cases of a changing relkind, we can't add a test verifying that it is handled
correctly.
For 15, fix the problem by not maintaining the association with the old pgstat
entry when the relkind changes during relcache invalidation processing. In
that case the pgstat entry needs to be unlinked first, to avoid
PgStat_TableStatus->relation getting out of sync. Also add a test reproducing
the issues.
No known problem exists in 11-14, so just add the test there.
Reported-by: vignesh C <vignesh21@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm3oZA-8Wbps2Jd1g5_Gjrr-x3YWrJPek-mF5Asrrvz2Dg@mail.gmail.com
Backpatch: 15-
Some options of these commands need to be able to identify the start
of the function body within the output of pg_get_functiondef().
It used to be that the body always began with "AS", but since the
introduction of new-style SQL functions, it might also start with
"BEGIN" or "RETURN". Fix that on the psql side, and add some
regression tests.
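As a rough standalone illustration of the keyword test involved (this is
not the psql code, which works on the output of pg_get_functiondef() in a
more careful way):

#include <stdio.h>
#include <string.h>

/* Guess whether a line of pg_get_functiondef() output begins the body. */
static int
is_body_start(const char *line)
{
    return strncmp(line, "AS ", 3) == 0 ||
           strncmp(line, "BEGIN", 5) == 0 ||
           strncmp(line, "RETURN", 6) == 0;
}

int
main(void)
{
    const char *lines[] = {
        "CREATE OR REPLACE FUNCTION public.add(a integer, b integer)",
        " RETURNS integer",
        " LANGUAGE sql",
        "RETURN (a + b)",        /* new-style SQL body: no AS keyword */
    };

    for (size_t i = 0; i < sizeof(lines) / sizeof(lines[0]); i++)
        if (is_body_start(lines[i]))
            printf("body starts at line %zu\n", i + 1);

    return 0;
}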
Noted by me a while ago, but I didn't do anything about it.
Thanks to David Johnston for a nag.
Discussion: https://postgr.es/m/AM9PR01MB8268D5CDABDF044EE9F42173FE8C9@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
For updates and deletes, we were not honoring the column list for old
tuple values while sending tuple data via pgoutput. This resulted in
pgoutput emitting more columns than expected.
This is not a problem for built-in logical replication, as we simply ignore
additional columns based on the relation information sent previously, which
didn't have those columns. However, some other users of the pgoutput plugin
may expect the columns as per the column list. Also, sending extra columns
unnecessarily consumes network bandwidth defeating the purpose of the
column list feature.
Reported-by: Gunnar Morling
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/CADGJaX9kiRZ-OH0EpWF5Fkyh1ZZYofoNRCrhapBfdk02tj5EKg@mail.gmail.com
This has always worked, but you'd be unlikely to guess it
from the documentation. Add an example showing it.
Lack of docs noted by David Johnston. Back-patch to v13;
the documentation layout we used before that was not very
amenable to squeezing in multiple examples.
Discussion: https://postgr.es/m/CAKFQuwZ4Vy1Xty0G5Ok+ot=NDrU3C6hzF1JwUk-FEkwe3V9_RA@mail.gmail.com
Rearrange the discussion of user-private schemas so that details
applying only to upgraded-from-pre-v15 databases are in a follow-on
paragraph, not in the main description of how to set up this pattern.
This seems a little clearer even today, and it'll get more so as
pre-v15 systems fade into the sunset.
Wording contributions from Robert Haas, Tom Lane, Noah Misch.
Discussion: https://postgr.es/m/CA+TgmoYUHsfp90inEMAP0yNr7Y_L6EphPH1YOon1JKtBztXHyQ@mail.gmail.com
In commit 40c24bfef, I forgot to use get_rule_expr_paren() for the
arguments of AT TIME ZONE, resulting in possibly not printing parens
for expressions that need it. But get_rule_expr_paren() wouldn't have
gotten it right anyway, because isSimpleNode() hadn't been taught that
COERCE_SQL_SYNTAX parent nodes don't guarantee sufficient parentheses.
Improve all that. Also use this methodology for F_IS_NORMALIZED, so
that we don't print useless parens for that.
In passing, remove a comment that was obsoleted later.
Per report from Duncan Sands. Back-patch to v14 where this code
came in. (Before that, we didn't try to print AT TIME ZONE that way,
so there was no bug, just ugliness.)
Discussion: https://postgr.es/m/f41566aa-a057-6628-4b7c-b48770ecb84a@deepbluecap.com
This doc patch (master hash 66bc9d2d3e) was deemed too significant for
backpatching, so it has been reverted in all branches but master. Also fix the
SGML file header comment in master.
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/c6304b19-6ff7-f3af-0148-cf7aa7e2fbfd@enterprisedb.com
Backpatch-through: 11
Writing "pg_regress --dbname= ..." led to a crash, because
we weren't expecting there to be no database name supplied.
It doesn't seem like a great idea to run regression tests
in whatever is the user's default database, so rather than
supporting this case, let's explicitly reject it.
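For illustration only (this is not the pg_regress option handling), the
required check is just a rejection of an empty value:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char **argv)
{
    const char *dbname = "regression";

    for (int i = 1; i < argc; i++)
    {
        if (strncmp(argv[i], "--dbname=", 9) == 0)
            dbname = argv[i] + 9;
    }

    if (*dbname == '\0')
    {
        fprintf(stderr, "error: no database name was specified\n");
        exit(2);
    }

    printf("using database \"%s\"\n", dbname);
    return 0;
}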
Per report from Xing Guo. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/CACpMh+A8cRvtvtOWVAZsCM1DU81GK4DL26R83y6ugZ1osV=ifA@mail.gmail.com
This also adds references to this new chapter at relevant sections of
our documentation. Previously, many of these internal details were
exposed to users, but not explained. This also updates RELEASE
SAVEPOINT.
Discussion: https://postgr.es/m/CANbhV-E_iy9fmrErxrCh8TZTyenpfo72Hf_XD2HLDppva4dUNA@mail.gmail.com
Author: Simon Riggs, Laurenz Albe
Reviewed-by: Bruce Momjian
Backpatch-through: 11
The frontend-side routine in charge of building a SCRAM verifier
mentioned that the encoding-related restrictions on applying SASLprep to
the password are described at the top of fe-auth-scram.c, but this
information is in auth-scram.c.
This has been wrong since 8f8b9be, so backpatch all the way down as this is an
important documentation bit.
Spotted while reviewing a different patch.
Backpatch-through: 11
Previously, we'd compress only when the active range of array entries
reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids).
If max_connections is large, the first term could result in not
compressing for a long time, resulting in much wastage of cycles in
hot-standby backends scanning the array to take snapshots. Get rid
of that term, and just bound it to 2 * pArray->numKnownAssignedXids.
That, however, creates the opposite risk, that we might spend too much
effort compressing. Hence, consider compressing only once every 128
commit records. (This frequency was chosen by benchmarking. While
we only tried one benchmark scenario, the results seem stable over
a fairly wide range of frequencies.)
Also, force compression when processing RecoveryInfo WAL records
(which should be infrequent); the old code could perform compression
then, but would do so only after the same array-range check as for
the transaction-commit path.
Also, opportunistically run compression if the startup process is about
to wait for WAL, though not oftener than once a second. This should
prevent cases where we waste lots of time by leaving the array
not-compressed for long intervals due to low WAL traffic.
Lastly, add a simple check to keep us from uselessly compressing
when the array storage is already compact.
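Taken together, the policy looks roughly like the following standalone
sketch (the names, and the exact placement of the checks, are invented for
the example; the committed logic lives in procarray.c):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

typedef enum
{
    COMPRESS_COMMIT,    /* processing a transaction-commit record */
    COMPRESS_FORCED,    /* e.g. a running-xacts record: always compress */
    COMPRESS_IDLE       /* startup process about to wait for WAL */
} CompressReason;

static int    commits_since_compress = 0;
static time_t last_idle_compress = 0;

static bool
should_compress(CompressReason reason, int used_entries, int valid_xids)
{
    /* Nothing to gain if the array storage is already compact. */
    if (used_entries == valid_xids)
        return false;

    switch (reason)
    {
        case COMPRESS_FORCED:
            return true;
        case COMPRESS_COMMIT:
            /* Only consider compressing every 128 commit records ... */
            if (++commits_since_compress < 128)
                return false;
            commits_since_compress = 0;
            /* ... and only if the active range has grown enough. */
            return used_entries >= 2 * valid_xids;
        case COMPRESS_IDLE:
            /* About to wait for WAL: compress at most once a second. */
            if (time(NULL) - last_idle_compress < 1)
                return false;
            last_idle_compress = time(NULL);
            return true;
    }
    return false;
}

int
main(void)
{
    /* Simulate 300 commit records against a bloated array. */
    for (int i = 1; i <= 300; i++)
        if (should_compress(COMPRESS_COMMIT, 2000, 900))
            printf("compress at commit %d\n", i);
    return 0;
}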
Back-patch, as the performance problem is worse in pre-v14 branches
than in HEAD.
Simon Riggs and Michail Nikolaev, with help from Tom Lane and
Andres Freund.
Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com
This is an oversight in commit 7c337b6b5: I apparently didn't think
about the possibility of a SQL function being executed multiple
times within a query. In that case, functions.c's primitive caching
mechanism allows the same utility parse tree to be presented for
execution more than once. We have to tell ProcessUtility to make
a working copy of the parse tree, or bad things happen.
Normally I'd add a regression test, but I think the reported crasher
is dependent on some rather random implementation choices that are
nowhere near functions.c, so its usefulness as a long-lived test
feels questionable. In any case, this fix is clearly correct given
the design choices of 7c337b6b5.
Per bug #17702 from Xin Wen. Thanks to Daniel Gustafsson for
analysis. Back-patch to v14 where the faulty commit came in
(before that, the responsibility for copying scribble-able
utility parse trees lay elsewhere).
Discussion: https://postgr.es/m/17702-ad24fdcdd1e9047a@postgresql.org
The JOIN_SEMI case Assert'ed that there are no PlaceHolderVars that
need to be evaluated at the semijoin's RHS, which is wrong because
there could be some in the semijoin's qual condition. However, there
could not be any references further up than that, and within the qual
there is not any way that such a PHV could have gone to null yet, so
we don't really need the PHV and there is no need to avoid making the
RHS-removal optimization. The upshot is that there's no actual bug
in production code, and we ought to just remove this misguided Assert.
While we're here, also drop the JOIN_RIGHT case, which is dead code
because reduce_outer_joins() already got rid of JOIN_RIGHT.
Per bug #17700 from Xin Wen. Uselessness of the JOIN_RIGHT case
pointed out by Richard Guo. Back-patch to v12 where this code
was added.
Discussion: https://postgr.es/m/17700-2b5c10d917c30687@postgresql.org
When loading plperl built against Strawberry perl or the msys2 ucrt perl,
both of which are built with gcc, a binary mismatch error like the
following has been encountered:
loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)
To cure this we bring the handshake keys into sync by adding
NO_THREAD_SAFE_LOCALE to the defines used to build plperl.
Discussion: https://postgr.es/m/20211005004334.tgjmro4kuachwiuc@alap3.anarazel.de
Discussion: https://postgr.es/m/c2da86a0-2906-744c-923d-16da6047875e@dunslane.net
Backpatch to all live branches.
Commit 9b4eafcaf4 added creation of a directory to reserve TAP test
ports at the top of the build tree. In a non-vpath build this means at
the top of the source tree, so it needs to be added to .gitignore.
As suggested by Michael Paquier
Backpatch to all live branches.
Strawberry Perl uses __builtin_expect, which Visual C doesn't have. For this
case, define it as a no-op. Solution taken from the vim sources.
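A compilable sketch of such a no-op fallback, in the spirit of the vim
workaround (the define actually added for plperl may be spelled and placed
differently):

/* Only needed on compilers that lack the builtin, such as Visual C. */
#if !defined(__GNUC__)
#define __builtin_expect(expr, expected) (expr)
#endif

#include <stdio.h>

int
main(void)
{
    int x = 1;

    /* With the fallback macro, the hint is simply discarded. */
    if (__builtin_expect(x == 1, 1))
        printf("branch taken\n");
    return 0;
}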
Backpatch to all live branches
Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added code to
nodeModifyTable.c to flush pending inserts to the foreign-table
result relation(s) before completing processing of the ModifyTable node,
but the code failed to take into account the case where the INSERT query
has modifying CTEs, leading to incorrect results.
Also, that commit failed to flush pending inserts before firing BEFORE
ROW triggers, so pending rows were not visible to such triggers.
In that commit we scanned through EState's
es_tuple_routing_result_relations or es_opened_result_relations list to
find the foreign-table result relations to which pending inserts are
flushed, but that would be inefficient in some cases. So to fix, 1) add
a List member to EState to record the insert-pending result relations,
and 2) modify nodeModifyTable.c so that it adds the foreign-table result
relation to the list in ExecInsert() if appropriate, and flushes pending
inserts properly using the list where needed.
While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(),
which was added by that commit.
Back-patch to v14 where that commit appeared.
Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
In commit 272248a0c, we introduced an InitialRunningXacts array to
remember transactions and subtransactions that were running when the
xl_running_xacts record that we decoded was written. This array was
allocated in the snapshot builder memory context after we restore a
serialized snapshot, but we forgot to reset the array while freeing the
builder memory context. So, the next time we start decoding in the
same session without restoring any serialized snapshot, we end up
using the uninitialized array, which can lead to unpredictable behavior.
This problem doesn't exist in HEAD as instead of using
InitialRunningXacts, we added the list of transaction IDs and
sub-transaction IDs that have modified catalogs and are running during
snapshot serialization, to the serialized snapshot (see commit 7f13ac8123).
Reported-by: Maxim Orlov
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Maxim Orlov
Backpatch-through: 11
Discussion: https://postgr.es/m/CACG=ezZoz_KG+Ryh9MrU_g5e0HiVoHocEvqFF=NRrhrwKmEQJQ@mail.gmail.com
We've made multiple attempts at preventing get_actual_variable_range
from taking an unreasonable amount of time (3ca930fc3, fccebe421).
But there's still an issue for the very first planning attempt after
deletion of a large number of extremal-valued tuples. While that
planning attempt will set "killed" bits on the tuples it visits and
thereby reduce effort for next time, there's still a lot of work it
has to do to visit the heap and then set those bits. It's (usually?)
not worth it to do that much work at plan time to have a slightly
better estimate, especially in a context like this where the table
contents are known to be mutating rapidly.
Therefore, let's bound the amount of work to be done by giving up
after we've visited 100 heap pages. Giving up just means we'll
fall back on the extremal value recorded in pg_statistic, so it
shouldn't mean that planner estimates suddenly become worthless.
Note that this means we'll still gradually whittle down the problem
by setting a few more index "killed" bits in each planning attempt;
so eventually we'll reach a good state (barring further deletions),
even in the absence of VACUUM.
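The shape of the bounded scan, as a standalone model (the structures and
the constant's name are invented for the example; the real code is in
selfuncs.c and works against a live index scan):

#include <stdbool.h>
#include <stdio.h>

#define MAX_HEAP_PAGES_VISITED 100    /* hypothetical name for the limit */

typedef struct IndexEntrySim
{
    int heap_block;    /* heap page this index entry points at */
    int value;         /* the indexed column value */
} IndexEntrySim;

/*
 * Return true and set *result if a live extremal value is found within the
 * page budget; return false to make the caller fall back on the value
 * recorded in pg_statistic.
 */
static bool
scan_for_extremal_value(const IndexEntrySim *entries, int nentries,
                        const bool *entry_is_dead, int *result)
{
    int pages_visited = 0;
    int last_block = -1;

    for (int i = 0; i < nentries; i++)
    {
        if (entries[i].heap_block != last_block)
        {
            if (++pages_visited > MAX_HEAP_PAGES_VISITED)
                return false;    /* too much work: give up */
            last_block = entries[i].heap_block;
        }
        if (!entry_is_dead[i])
        {
            *result = entries[i].value;
            return true;
        }
    }
    return false;
}

int
main(void)
{
    IndexEntrySim entries[200];
    bool          dead[200];
    int           result;

    /* All extremal entries recently deleted, each on its own heap page. */
    for (int i = 0; i < 200; i++)
    {
        entries[i].heap_block = i;
        entries[i].value = i;
        dead[i] = true;
    }

    if (!scan_for_extremal_value(entries, 200, dead, &result))
        printf("gave up; fall back on pg_statistic\n");
    return 0;
}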
Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
adjustments by me). Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
Currently there is a race condition where, if concurrent TAP tests both
test that they can open a port, they will assume that it is free and use
it, causing one of them to fail. To prevent this we record a reservation
using an exclusive lock, and any TAP test that discovers a reservation
checks to see if the reserving process is still alive, and looks for
another free port if it is.
Ports are reserved in a directory set by the environment setting
PG_TEST_PORT_DIR, or, if that doesn't exist, a subdirectory of the top
build directory as set by Makefile.global, or, failing that, its own
tmp_check directory.
The prove_check recipe in Makefile.global.in is extended to export
top_builddir to the TAP tests. This was already exported by the
prove_installcheck recipes.
Per complaint from Andres Freund
Backpatched from 9b4eafcaf4 to all live branches
Discussion: https://postgr.es/m/20221002164931.d57hlutrcz4d2zi7@awork3.anarazel.de
Once a logical slot has acquired a catalog_xmin, it doesn't let go of
it, even when invalidated by exceeding max_slot_wal_keep_size, which
means that dead catalog tuples are no longer removed by vacuum from the
point of invalidation onward, until the slot is dropped. This could be
catastrophic if catalog churn is high.
Change the computation of Xmin to ignore invalidated slots,
to prevent dead rows from accumulating.
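Schematically, the changed horizon computation behaves like this
standalone model (structures invented for the example; the real code works
on the shared replication slot data):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct SlotSim
{
    bool     in_use;
    bool     invalidated;    /* e.g. exceeded max_slot_wal_keep_size */
    uint32_t catalog_xmin;   /* 0 means "no xmin held" */
} SlotSim;

static uint32_t
catalog_xmin_horizon(const SlotSim *slots, int nslots)
{
    uint32_t horizon = 0;

    for (int i = 0; i < nslots; i++)
    {
        if (!slots[i].in_use || slots[i].catalog_xmin == 0)
            continue;
        if (slots[i].invalidated)
            continue;        /* the fix: invalidated slots no longer count */
        if (horizon == 0 || slots[i].catalog_xmin < horizon)
            horizon = slots[i].catalog_xmin;
    }
    return horizon;
}

int
main(void)
{
    SlotSim slots[] = {
        {true, true, 1000},    /* invalidated slot with an old catalog_xmin */
        {true, false, 5000},
    };

    printf("catalog xmin horizon: %u\n",
           (unsigned) catalog_xmin_horizon(slots, 2));
    return 0;
}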
Backpatch to 13, where slot invalidation appeared.
Author: Sirisha Chamarthi <sirichamarthi22@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAKrAKeUEDeqquN9vwzNeG-CN8wuVsfRYbeOUV9qKO_RHok=j+g@mail.gmail.com
The Hunspell project moved from SourceForge to GitHub sometime
in 2016, so update our links to match the new URL. Backpatch
the doc changes to all supported versions.
Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se
Backpatch-through: v11
I just spent an annoying amount of time reverse-engineering the
100%-undocumented API between ts_headline and the text search
parser's prsheadline function. Add some commentary about that
while it's fresh in mind. Also remove some unused macros in
wparser_def.c.
While at it, I noticed that when commit 78e73e875 added a
CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed
doing so in the parallel function TS_phrase_execute, which
surely needs one just as much.
Back-patch because of the missing CHECK_FOR_INTERRUPTS.
Might as well back-patch the rest of this too.
These functions have been marked parallel safe, but the buildfarm's
response to commit e2933a6e1 exposed the flaw in that thinking:
if you try to use them on a temporary table, and they run inside
a parallel worker, they'll fail with "cannot access temporary tables
during a parallel operation".
Fix that by marking them parallel restricted instead. Maybe someday
we'll have a better answer and can reverse this decision.
Back-patch to v15. To go back further, we'd have to devise variant
versions of pre-1.10 pageinspect. Given the lack of field
complaints, it doesn't seem worth the trouble. We'll just deem
this case unsupported pre-v15. (If anyone does complain, it might
be good enough to update the markings manually in their DBs.)
Discussion: https://postgr.es/m/E1ox94a-000EHu-VH@gemulon.postgresql.org
pageinspect has occasionally failed on slow buildfarm members,
with symptoms indicating that the expected effects of VACUUM
FREEZE didn't happen. This is presumably because a background
transaction such as auto-analyze was holding back global xmin.
We can work around that by using a temp table in the test.
Since commit a7212be8b, that will use an up-to-date cutoff xmin
regardless of other processes. And pageinspect itself shouldn't
really care whether the table is temp.
Back-patch to v14. There would be no point in older branches
without back-patching a7212be8b, which seems like more trouble
than the problem is worth.
Discussion: https://postgr.es/m/2892135.1668976646@sss.pgh.pa.us
ProcSleep() used a PGPROC* variable to point to PROC_QUEUE->links.next,
because that does "the right thing" with SHMQueueInsertBefore(). While that
largely works, it's certainly not correct, and it's unnecessary - we can just
use a SHM_QUEUE* to point to the insertion point.
Noticed when testing a 32-bit build of postgres with the undefined behavior
sanitizer. UBSan noticed that sometimes the supposed PGPROC wasn't
sufficiently aligned (required since 46d6e5f5679, ensured indirectly, via
ShmemAllocRaw() guaranteeing cacheline alignment).
For now fix this by using a SHM_QUEUE* for the insertion point. Subsequently
we should replace all the use of PROC_QUEUE and SHM_QUEUE with ilist.h, but
that's a larger change that we don't want to backpatch.
Backpatch to all supported versions - it's useful to be able to run postgres
under UBSan.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221117014230.op5kmgypdv2dtqsf@awork3.anarazel.de
Backpatch: 11-
The test output varies when debug_discard_caches is enabled,
because that causes extra executions of recomputeNamespacePath.
Maybe putting a hook in that was a bad idea, but as a stopgap,
just turn off debug_discard_caches in this test.
Per buildfarm (now that we have debug_discard_caches coverage
again). Back-patch to v15 where this module was added.
Discussion: https://postgr.es/m/2267406.1668804934@sss.pgh.pa.us
basics.source is supposed to be pretty closely in step with
the examples in chapter 2 of the tutorial, but I forgot to
update it in commit f05a5e000. Fix that, and adjust a couple
of other discrepancies that had crept in over time.
(I notice that advanced.source is nowhere near being in sync
with chapter 3, but I lack the ambition to do something
about that right now.)
Version strings with unequal numbers of parts were being compared
incorrectly. We cure this by treating a missing part in the shorter
version as 0.
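The comparison rule can be illustrated with a standalone C sketch (the fix
itself is in the Perl test infrastructure's version handling, and this
sketch covers purely numeric dotted versions only):

#include <stdio.h>
#include <stdlib.h>

/* Fetch the next numeric part of a dotted version; missing parts are 0. */
static long
next_part(const char **s)
{
    char *end;
    long  part;

    if (**s == '\0')
        return 0;
    part = strtol(*s, &end, 10);
    *s = (*end == '.') ? end + 1 : end;
    return part;
}

/* Compare dotted version strings; returns <0, 0, or >0. */
static int
version_cmp(const char *a, const char *b)
{
    while (*a != '\0' || *b != '\0')
    {
        long pa = next_part(&a);
        long pb = next_part(&b);

        if (pa != pb)
            return (pa < pb) ? -1 : 1;
    }
    return 0;
}

int
main(void)
{
    printf("%d\n", version_cmp("9.6", "9.6.0"));    /* 0: equal */
    printf("%d\n", version_cmp("14.1", "14"));      /* 1: 14.1 > 14.0 */
    return 0;
}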
Per complaint from Jehan-Guillaume de Rorthais, but the fix is mine, not
his.
Discussion: https://postgr.es/m/20220628225325.53d97b8d@karst
Backpatch to release 14 where this code was introduced.
This comment references a struct that disappeared before MERGE was
merged ... and ExecDelete is not called by the committed MERGE anyway.
Revert to the original wording.
Backpatch to 15
During XLOG_HASH_SPLIT_ALLOCATE_PAGE replay, we were checking for a
cleanup lock on the new bucket page after acquiring an exclusive lock on
it and raising a PANIC error on failure. However, it is quite possible
that the checkpointer can acquire a pin on the same page before replay
acquires a lock on it, and then the replay will lead to an error. So instead, directly
acquire the cleanup lock on the new bucket page during
XLOG_HASH_SPLIT_ALLOCATE_PAGE replay operation.
Reported-by: Andres Freund
Author: Robert Haas
Reviewed-By: Amit Kapila, Andres Freund, Vignesh C
Backpatch-through: 11
Discussion: https://postgr.es/m/20220810022617.fvjkjiauaykwrbse@awork3.anarazel.de
The current code looks for the sample file in the source directory, but
it seems better to test against the installed sample file.
Backpatch to release 15 where the test was introduced.
Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net
Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
Currently this only allows for one argument, which must be present, and
always returns a single string. With this change the following now all
work:
$all_config = $node->config_data;
%config_map = ($node->config_data);
$incdir = $node->config_data('--include-dir');
($incdir, $sharedir) = $node->config_data(
qw(--include-dir --share-dir));
Backpatch to release 15 where this was introduced.
Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net
Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
Test files should now ignore has_wal_read_bug() so long as
wait_for_catchup() is their only known way of reaching the bug. That's
at least five files today, a number expected to grow over time. This
commit removes skip logic from three. By doing so, systems having the
bug regain the ability to catch other kinds of defects via those three
tests. The other two, 002_databases.pl and 031_recovery_conflict.pl,
have been unprotected. Back-patch to v15, where done_testing() first
became our standard.
Discussion: https://postgr.es/m/20221030031639.GA3082137@rfd.leadboat.com
The original report was concerned with a possible inconsistency
between the heap and the visibility map, which I was unable to
confirm. The concern has been retracted.
However, there did seem to be a torn page hazard when using
checksums. By not setting the heap page LSN during redo, the
protections of minRecoveryPoint were bypassed. Fixed, along with a
misleading comment.
It may have been impossible to hit this problem in practice, because
it would require a page tear between the checksum and the flags, so I
am marking this as a theoretical risk. But, as discussed, it did
violate expectations about the page LSN, so it may have other
consequences.
Backpatch to all supported versions.
Reported-by: Konstantin Knizhnik
Reviewed-by: Konstantin Knizhnik
Discussion: https://postgr.es/m/fed17dac-8cb8-4f5b-d462-1bb4908c029e@garret.ru
Backpatch-through: 11
The stanza "SET STORAGE may need to add a TOAST table" does not
test what it's supposed to, and hasn't done so since we added
the ability to store constant column default values as metadata.
We need to use a non-constant default to get the expected table
rewrite to actually happen.
Fix that, and add the missing checks that would have exposed the
problem to begin with.
Noted while reviewing a patch that made changes in this test case.
Back-patch to v11 where the problem came in.
In commit 450ee7012 I supposed that all platforms we now care about have
snprintf(), since that's required by C99. Turns out that Microsoft did
not get around to adding that until VS2015. We've dropped support for
VS2013 as of HEAD (cf 6203583b7), but not in the back branches, so add
a hack for this in the back branches only.
There's no easy shortcut to an exact emulation of standard snprintf
in VS2013, but fortunately we don't need one: this code was just fine
with using sprintf before 450ee7012, so we can make it do so again
on that platform (and any others where the problem might crop up).
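A sketch of the kind of fallback involved (illustrative only; the
committed macro may differ in detail):

#include <stdio.h>

/* VS2013 and older lack a C99-conformant snprintf; route it to sprintf,
 * which this code was content to use before 450ee7012. */
#if defined(_MSC_VER) && _MSC_VER < 1900
#define snprintf(buf, size, ...) sprintf(buf, __VA_ARGS__)
#endif

int
main(void)
{
    char buf[32];

    snprintf(buf, sizeof(buf), "%d-%02d-%02d", 2022, 11, 21);
    printf("%s\n", buf);
    return 0;
}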
Per bug #17681 from Daisuke Higuchi. Back-patch to v12, like the
previous patch.
Discussion: https://postgr.es/m/17681-485ba2ec13e7f392@postgresql.org
The comments atop seem to indicate that we always accumulate invalidation
messages in a top-level transaction, which is neither required nor what
the code does.
Author: Amit Kapila
Reviewed-by: Masahiko Sawada
Backpatch-through: 14, where it was introduced in commit c55040ccd0
Discussion: https://postgr.es/m/CAA4eK1LxGgnUroPz8STb6OfjVU1yaHoSA+T63URwmGCLdMJ0LA@mail.gmail.com