The testclient and uri-regress programs in the libpq test suite had
quite generic names which didn't convey much meaning. Since they are
installed as part of the MSVC test runs, ensure that their purpose
is a little bit clearer by renaming them with a libpq_ prefix. While
at it, rename uri-regress to uri_regress to avoid mixing a dash and an
underscore in the same filename.
Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20220501080706.GA1542365@rfd.leadboat.com
The tests added in 9f8a050f68d failed nearly reliably on FreeBSD in CI, and
occasionally on the buildfarm. That turns out to be caused not by a bug in the
test, but by a longstanding bug in recovery conflict handling.
The standby timeout handler, used by ResolveRecoveryConflictWithBufferPin(),
executed SendRecoveryConflictWithBufferPin() inside a signal handler.
That was a bad idea, because the deadlock timeout handler (or a
spurious latch set) could have interrupted ProcWaitForSignal(). If
unlucky, that could cause a self-deadlock on ProcArrayLock, if the
deadlock check happens inside
SendRecoveryConflictWithBufferPin()->CancelDBBackends().
To fix, set a flag in StandbyTimeoutHandler(), and check the flag in
ResolveRecoveryConflictWithBufferPin().
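To illustrate the shape of the fix, here is a minimal standalone C
sketch of the pattern (simplified stand-in names, not the actual
PostgreSQL code): the handler merely records that the timeout fired,
and the heavy lifting happens back in the wait loop.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_standby_timeout = 0;

    /* Async-signal-safe: record that the timeout fired, nothing more. */
    static void
    standby_timeout_handler(int signo)
    {
        (void) signo;
        got_standby_timeout = 1;
    }

    int
    main(void)
    {
        signal(SIGALRM, standby_timeout_handler);
        alarm(1);

        for (;;)
        {
            pause();            /* stand-in for ProcWaitForSignal() */
            if (got_standby_timeout)
            {
                got_standby_timeout = 0;
                /* Safe here: we are no longer in signal context. */
                printf("sending recovery conflict outside the handler\n");
                break;
            }
        }
        return 0;
    }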
Subsequently the recovery conflict tests will be backpatched.
Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de
Backpatch: 10-
The recovery conflict tests added in 9f8a050f68d surfaced a bug in the
interaction between buffer pin and deadlock recovery conflicts. To make sure
that the bugfix won't break deadlock conflict detection, add a test for that
scenario.
031_recovery_conflict.pl will later be backpatched, with this included.
Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de
Instability in the test for pg_walinspect revealed that
pg_get_wal_records_info_till_end_of_wal(x) would try to decode all the
records with a start LSN earlier than the flush LSN, even though that
might include a partial record at the end of the range. In that case,
read_local_xlog_page_no_wait() would return NULL when it tried to read
past the flush LSN, which would be interpreted as an error by the
caller. That caused a test failure only on a BF animal that had been
restarted recently, but could be expected to happen in the wild quite
easily depending on the alignment of various parameters.
Fix by using private data in read_local_xlog_page_no_wait() to signal
end-of-wal to the caller, so that it can be properly distinguished
from a real error.
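As a rough standalone sketch of the idea (simplified stand-in types
and names, not the real xlogreader API), the page-read callback
reports end-of-WAL through caller-supplied private data instead of a
bare failure return:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct ReadPrivate
    {
        unsigned long flush_lsn;
        bool          end_of_wal;   /* set by callback, read by caller */
    } ReadPrivate;

    /* Returns bytes read, or -1 on failure or end-of-WAL. */
    static int
    read_page(unsigned long target_lsn, ReadPrivate *priv)
    {
        if (target_lsn >= priv->flush_lsn)
        {
            priv->end_of_wal = true;    /* not an error: out of WAL */
            return -1;
        }
        return 8192;                    /* pretend we read a page */
    }

    int
    main(void)
    {
        ReadPrivate priv = { 4096, false };

        if (read_page(8192, &priv) < 0)
        {
            if (priv.end_of_wal)
                printf("stopped cleanly at end of WAL\n");
            else
                printf("real read error\n");
        }
        return 0;
    }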
Discussion: https://postgr.es/m/Ymd/e5eeZMNAkrXo%40paquier.xyz
Discussion: https://postgr.es/m/111657.1650910309@sss.pgh.pa.us
Authors: Thomas Munro, Bharath Rupireddy.
I noticed that plpgsql would allow assignment of a new value to a
variable even when that variable is marked CONSTANT, if the variable
is used as an output parameter in CALL or is a refcursor variable
that OPEN assigns a new value to. Fix these oversights.
In the CALL case, the check has to be done at runtime because we
cannot know at parse time which parameters are OUT parameters.
For OPEN, it seems best to likewise enforce at runtime because
then we needn't throw error if the variable has a nonnull value
(since OPEN will only try to overwrite a null value).
Although this is surely a bug fix, no back-patch: it seems unlikely
that anyone would thank us for breaking formerly-working code in
minor releases.
Discussion: https://postgr.es/m/214453.1651182729@sss.pgh.pa.us
mark_async_capable_plan(), which is called from create_append_plan() to
determine whether subplans are async-capable, failed to take into
account that the subplan created from a given subpath might include a
gating Result node if the subpath is a SubqueryScanPath or ForeignPath.
In the SubqueryScanPath case, that caused a segmentation fault; in the
ForeignPath case, it caused ExecAsyncRequest() to throw an error about
an unrecognized node type, because the Result node was unintentionally
considered async-capable even though we don't currently support
executing Result nodes asynchronously. Fix by modifying
mark_async_capable_plan() to disable asynchronous execution in such
cases. Also, adjust the ProjectionPath case in
mark_async_capable_plan() for consistency with the other cases, and
adjust/improve the comments there.
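A rough standalone sketch of the guard (stand-in types, not the real
planner code): if the plan created from the path is not of the node
type the caller expects, e.g. because a gating Result was stacked on
top, asynchronous execution is not enabled.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum NodeTag { T_Result, T_ForeignScan, T_SubqueryScan } NodeTag;

    typedef struct Plan
    {
        NodeTag tag;
        bool    async_capable;
    } Plan;

    static bool
    mark_async_capable(Plan *plan, NodeTag expected)
    {
        /* Result nodes can't run asynchronously, so a gated plan
         * (tag != expected) must not be marked async-capable. */
        if (plan->tag != expected)
            return false;
        plan->async_capable = true;
        return true;
    }

    int
    main(void)
    {
        Plan gated = { T_Result, false };
        Plan plain = { T_ForeignScan, false };

        printf("%d %d\n",
               mark_async_capable(&gated, T_ForeignScan),  /* 0 */
               mark_async_capable(&plain, T_ForeignScan)); /* 1 */
        return 0;
    }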
is_async_capable_path() added in commit 27e1f1456, which was rewritten
to mark_async_capable_plan() in a later commit, has the same issue,
causing the error at execution mentioned above, so back-patch to v14
where the aforesaid commit went in.
Per report from Justin Pryzby.
Etsuro Fujita, reviewed by Zhihong Yu and Justin Pryzby.
Discussion: https://postgr.es/m/20220408124338.GK24419%40telsasoft.com
This reverts commits 2c902bb and ccfbd92. Per buildfarm members
kestrel, rorqual and calliphoridae, the assertions checking that a TLI
history file should not exist when created by a WAL receiver have been
failing, and switching to durable_rename() over durable_rename_excl()
would cause the newest TLI history file to overwrite the existing one.
We need to think harder about such cases, so revert the new logic for
now.
Note that all the failures have been reported in the test
025_stuck_on_old_timeline.
Discussion: https://postgr.es/m/511362.1651116498@sss.pgh.pa.us
ccfbd92 replaced all existing in-core callers of this function with
durable_rename(). durable_rename_excl() is inherently unsafe when a
crash happens at the wrong time, so just remove it.
Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
durable_rename_excl() attempts to avoid overwriting any existing files
by using link() and unlink(), falling back to rename() on some platforms
(e.g., Windows where link() followed by unlink() is not concurrent-safe,
see 909b449). Most callers of durable_rename_excl() use it just in
case there is an existing file, but in fact none of them ever expects
a target file to exist (WAL segment recycling, creation of timeline
history files, and basic_archive).
basic_archive used durable_rename_excl() to avoid overwriting an
archive concurrently created by another server. However, there is
already a stat() call a couple of lines above to avoid overwriting an
existing archive, so note that this change opens a small TOCTOU window
in this module between the stat() call and durable_rename().
Furthermore, as mentioned in the top comment of durable_rename_excl(),
this routine can result in multiple hard links to the same file,
causing data corruption: if a crash happens before the unlink() call
during WAL recycling, pg_wal/ can end up with two or more links to the
same file. Specifically, the current WAL file and the next one would
end up as links to the same file, because the half-recycled WAL file
is re-recycled during crash recovery at a follow-up cluster restart.
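For illustration, a standalone sketch of the link()/unlink() idiom and
its crash window (file names are hypothetical; this is not the removed
PostgreSQL function itself):

    #include <stdio.h>
    #include <unistd.h>

    static int
    rename_excl(const char *oldname, const char *newname)
    {
        if (link(oldname, newname) < 0)     /* fails if newname exists */
            return -1;
        /*
         * Crash window: if we die here, oldname and newname are two
         * hard links to the same file -- the corruption scenario
         * described above.
         */
        if (unlink(oldname) < 0)
            return -1;
        return 0;
    }

    int
    main(void)
    {
        /* hypothetical file names, for illustration only */
        if (rename_excl("wal.tmp", "wal.seg") != 0)
            perror("rename_excl");
        return 0;
    }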
This change replaces all calls to durable_rename_excl() with
durable_rename(). This removes the protection against accidentally
overwriting an existing file, but some platforms are already living
without it, and none of those code paths ever expects an existing file
(a couple of assertions are added to check for that, just in case).
This is a bug fix, but given how unlikely the problem is (it requires
one or more crashes at exceptionally bad moments), no backpatch is
done. This could be revisited in the future.
Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
Unlike existing WRITE_*_ARRAY macros, WRITE_INDEX_ARRAY needs to
handle the case that the field is NULL. We already have the
convention to print NULL fields as "<>", so we do that here as well.
There is currently no corresponding read function for this, so reading
this back in is not implemented, but it could be if needed.
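A standalone sketch of the convention (the printing helper is a
stand-in for the real outfuncs.c machinery, not the actual macro):

    #include <stdio.h>

    /* Emit "<>" for a NULL array field, per the convention above. */
    static void
    write_index_array(const char *fldname, const int *arr, int len)
    {
        printf(" :%s", fldname);
        if (arr == NULL)
        {
            printf(" <>");
            return;
        }
        for (int i = 0; i < len; i++)
            printf(" %d", arr[i]);
    }

    int
    main(void)
    {
        int keys[] = {1, 3, 2};

        write_index_array("sortColIdx", keys, 3);
        write_index_array("collations", NULL, 3);
        printf("\n");
        return 0;
    }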
Reported-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-LN%3DbF8f9eU2R94dJtF54DfDvBq%2BovqHnOQqbinYDrUw%40mail.gmail.com
This commit adds two isolation tests for CLUSTER, using:
- A normal table, making sure that CLUSTER blocks and completes if the
table is locked by a concurrent session.
- A partitioned table with a partition owned by a different user. If
the partitioned table is locked by a concurrent session, CLUSTER on the
partitioned table should block. If the partition owned by a different
user is locked, CLUSTER on its partitioned table should complete and
skip the partition. 3f19e17 added an early check to ignore such a
partition, with an SQL regression test, but that test did not check
that CLUSTER does not block.
Discussion: https://postgr.es/m/YlqveniXn9AI6RFZ@paquier.xyz
For some reason by default the mingw C Runtime takes it upon itself to
expand program arguments that look like shell globbing characters. That
has caused much scratching of heads and mis-attribution of the causes of
some TAP test failures, so stop doing that.
This removes an inconsistency with Windows binaries built with MSVC,
which have no such behaviour.
Per suggestion from Noah Misch.
Backpatch to all live branches.
Discussion: https://postgr.es/m/20220423025927.GA1274057@rfd.leadboat.com
Another test suite is constructed on top of the regression tests, and
it does not work correctly with unlogged tables. For now, cope with
that by making sure no unlogged table is left behind.
Per buildfarm pink after 4fb5c794e586.
Several places didn't do it, and in many cases it didn't matter because
it would be a small allocation in a short-lived context; but other
places may accumulate a few (for example, in CreateDatabaseUsingFileCopy,
one per tablespace). In most databases this is highly unlikely to be
very serious either, but it seems better to make the code consistent in
case there's future copy-and-paste.
The only case of actual concern seems to be the aforementioned routine,
which is new with commit 9c08aea6a309, so there's no need to backpatch.
As pointed out by Coverity.
This function looks for a reference to the recursive WITH CTE, but it
checked only the CTE name, not ctelevelsup, so it could seize on a
lower CTE that happened to have the same name. This
would result in planner failures later, either weird errors such as
"could not find attribute 2 in subquery targetlist", or crashes
or assertion failures. The code also merely Assert'ed that it found
a matching entry, which is not guaranteed at all by the parser.
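A standalone sketch of the corrected matching rule (stand-in types,
not the planner's real code): a reference matches only if both the
name and the query level agree.

    #include <stdbool.h>
    #include <string.h>

    typedef struct CteRef
    {
        const char *ctename;
        int         ctelevelsup;    /* query levels up to the CTE */
    } CteRef;

    static bool
    matches_target_cte(const CteRef *ref, const char *name, int levelsup)
    {
        /* Before the fix, only the name was compared. */
        return strcmp(ref->ctename, name) == 0 &&
               ref->ctelevelsup == levelsup;
    }

    int
    main(void)
    {
        CteRef ref = { "x", 1 };

        /* Same name at a different level must not match. */
        return (matches_target_cte(&ref, "x", 1) &&
                !matches_target_cte(&ref, "x", 0)) ? 0 : 1;
    }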
Per bugs #17320 and #17318 from Zhiyong Wu.
Thanks to Kyotaro Horiguchi for investigation.
Discussion: https://postgr.es/m/17320-70e37868182512ab@postgresql.org
Discussion: https://postgr.es/m/17318-2eb65a3a611d2368@postgresql.org
Commit d2d35479796c3510e249d6fc72adbd5df918efbf included a pretty
extensive set of test cases, and some of them don't work on all
of our Windows machines. This happens because IPC::Run expands
its arguments as shell globs on a few machines, but doesn't on most
of the buildfarm. It might be good to fix that problem systematically
somehow, but in the meantime, there are enough test cases for this
commit that it seems OK to just remove the ones that are failing.
Discussion: http://postgr.es/m/3a190754-b2b0-d02b-dcfd-4ec1610ffbcb@dunslane.net
Discussion: http://postgr.es/m/CA+TgmoYRGUcFBy6VgN0+Pn4f6Wv=2H0HZLuPHqSy6VC8Ba7vdg@mail.gmail.com
697492434 added 3 new qsort specialization functions aimed at
improving the performance of sorting many of the common pass-by-value
data types when they're the leading or only sort key.
Unfortunately, that caused a performance regression when sorting
datasets in which many of the values being compared were equal. What
was happening was that we fell back to the standard sort comparison
function to handle tiebreaks: when the two given Datums compared
equally, we incurred both the overhead of an indirect function call to
the standard comparator to perform the tiebreak, and that comparator
then needlessly compared the leading key all over again.
Here we improve the situation in the 3 new comparison functions: we
now return 0 directly when the two Datums compare equally and we're
performing a 1-key sort.
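A standalone sketch of the short-circuit (illustrative names and
types, not the actual tuplesort code):

    #include <stdbool.h>
    #include <stdio.h>

    typedef long Datum;

    typedef struct SortState
    {
        bool onlyKey;           /* single-key sort: no tiebreak needed */
    } SortState;

    /* Stub for the generic multi-key fallback comparator. */
    static int
    fallback_comparetup(Datum a, Datum b, SortState *state)
    {
        (void) a; (void) b; (void) state;
        return 0;               /* would compare the remaining keys */
    }

    static int
    qsort_int_compare(Datum a, Datum b, SortState *state)
    {
        if (a < b)
            return -1;
        if (a > b)
            return 1;

        /* Equal on the leading key: short-circuit for 1-key sorts. */
        if (state->onlyKey)
            return 0;

        return fallback_comparetup(a, b, state);
    }

    int
    main(void)
    {
        SortState state = { true };

        printf("%d\n", qsort_int_compare(42, 42, &state)); /* 0, no fallback */
        return 0;
    }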
We don't do anything here to help the multi-key sort case where the
leading key uses one of the sort specialization functions. On testing
this case, even when the leading key's values are all equal, there
appeared to be no performance regression. Let's leave it up to future
work to optimize that case so that the tiebreak function no longer
re-compares the leading key over again.
Another possible fix for this would have been to add 3 additional sort
specialization functions to handle single-key sorts for these
pass-by-value types. The reason we didn't do that here is that we may
deem some other sort specialization to be more useful than single-key
sorts. It may be impractical to have sort specialization functions for
every single combination of what may be useful and it was already decided
that further analysis into which ones are the most useful would be delayed
until the v16 cycle. Let's not let this regression force our hand into
trying to make that decision for v15.
Author: David Rowley
Reviewed-by: John Naylor
Discussion: https://postgr.es/m/CA+hUKGJRbzaAOUtBUcjF5hLtaSHnJUqXmtiaLEoi53zeWSizeA@mail.gmail.com
inline_cte() expected to find exactly as many references to the
target CTE as its cterefcount indicates. While that should be
accurate for the tree as emitted by the parser, there are some
optimizations that occur upstream of here that could falsify it,
notably removal of unused subquery output expressions.
Trying to make the accounting 100% accurate seems expensive and
doomed to future breakage. It's not really worth it, because all this
code protects is the downstream assumption that every referenced CTE
has a plan. Let's convert those assertions to regular test-and-elog
checks just in case there's some actual problem, and then drop the
failing assertion.
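A standalone sketch of the hardening pattern (elog_error is a stub
standing in for PostgreSQL's error machinery):

    #include <stdio.h>
    #include <stdlib.h>

    #define elog_error(msg) \
        do { fprintf(stderr, "ERROR: %s\n", msg); exit(1); } while (0)

    static void *
    get_cte_plan(void *plan)
    {
        /* Before: Assert(plan != NULL), which vanishes in production
         * builds.  Now: a regular runtime check. */
        if (plan == NULL)
            elog_error("could not find plan for CTE");
        return plan;
    }

    int
    main(void)
    {
        int dummy = 42;

        get_cte_plan(&dummy);   /* fine */
        /* get_cte_plan(NULL) would now report an error cleanly rather
         * than crashing or failing only in assert-enabled builds. */
        return 0;
    }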
Per report from Tomas Vondra (thanks also to Richard Guo for
analysis). Back-patch to v12 where the faulty code came in.
Discussion: https://postgr.es/m/29196a1e-ed47-c7ca-9be2-b1c636816183@enterprisedb.com
Commit 618c16707 invented an "error_result" flag in PGconn, which
intends to represent the state that we have an error condition and
need to build a PGRES_FATAL_ERROR PGresult from the message text in
conn->errorMessage, but have not yet done so. (Postponing construction
of the error object simplifies dealing with out-of-memory conditions
and with concatenation of messages for multiple errors.) For nearly all
purposes, this "virtual" PGresult object should act the same as if it
were already materialized. But a couple of places in fe-protocol3.c
didn't get that memo, and were only testing conn->result as they used
to, without also checking conn->error_result.
In hopes of reducing the probability of similar mistakes in future,
I invented a pgHavePendingResult() macro that includes both tests.
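A standalone sketch of the combined test, modeled on the description
above (the PGconn here is a stub with just the two fields involved,
not libpq's real struct):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct PGconn
    {
        void *result;           /* materialized PGresult, if any */
        bool  error_result;     /* "virtual" error result pending? */
    } PGconn;

    /* True if a result is pending, materialized or still virtual. */
    #define pgHavePendingResult(conn) \
        ((conn)->result != NULL || (conn)->error_result)

    int
    main(void)
    {
        PGconn conn = { NULL, true };

        printf("%d\n", pgHavePendingResult(&conn));     /* 1 */
        return 0;
    }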
Per report from Peter Eisentraut.
Discussion: https://postgr.es/m/b52277b9-fa66-b027-4a37-fb8989c73ff8@enterprisedb.com
Commit aa0105141 assigned fixed OIDs to template0 and postgres
in a very ad-hoc way. Notably, instead of teaching Catalog.pm
about these OIDs, the unused_oids script was just hacked to
not show them as unused. That's problematic since, for example,
duplicate_oids wouldn't report any future conflict. Hence,
invent a macro DECLARE_OID_DEFINING_MACRO() that can be used to
define an OID that is known to Catalog.pm and will participate
in duplicate-detection as well as renumbering by renumber_oids.pl.
(We don't anticipate renumbering these particular OIDs, but we
might as well build out all the Catalog.pm infrastructure while
we're here.)
Another issue is that aa0105141 neglected to touch IsPinnedObject,
with the result that it now claimed template0 and postgres are
pinned. The right thing to do there seems to be to teach it that
no database is pinned, since in fact DROP DATABASE doesn't check
for pinned-ness (and at least for these cases, that is an
intentional choice). It's not clear whether this wrong answer
had any visible effect, but perhaps it could have resulted in
erroneous management of dependency entries.
In passing, rename the TemplateDbOid macro to Template1DbOid
to reduce confusion (likely we should have done that way back
when we invented template0, but we didn't), and rename the
OID macros for template0 and postgres to have a similar style.
There are no changes to postgres.bki here, so no need for a
catversion bump.
Discussion: https://postgr.es/m/2935358.1650479692@sss.pgh.pa.us
Some places used the spelling ZSTD, which isn't widely used anywhere
else. Use ZSTD only to refer to the environment variable; use zstd
(all lowercase) to refer to the utility.
Per complaint from Justin Pryzby.
Discussion: https://postgr.es/m/20220414003301.GT26620@telsasoft.com
This is needed so that renumber_oids.pl can handle renumbering
shared catalog declarations, which need to provide C macros for
the OIDs of the shared toast table and index. The previous
method of writing a C macro separately was error-prone anyway.
Also teach renumber_oids.pl about DECLARE_UNIQUE_INDEX_PKEY,
as we missed doing when inventing that macro.
There are no changes to postgres.bki here, so no need for a
catversion bump.
Discussion: https://postgr.es/m/2995325.1650487527@sss.pgh.pa.us
Previously, we allowed this, but such triggers might query the table
being inserted into and act differently if the tuples that have
already been processed and prepared for insertion are not there yet,
so disable batch insert in such cases.
Back-patch to v14 where batch insert was added.
Discussion: https://postgr.es/m/CAPmGK16_uPqsmgK0-LpLSUk54_BoK13bPrhxhfjSoSTVz414hA%40mail.gmail.com
CLUSTER sort won't use the datum1 SortTuple field when clustering
against an index whose leading key is an expression. This makes it
unsafe to use the abbreviated keys optimization, which was missed by the
logic that sets up SortSupport state. Affected tuplesorts output tuples
in a completely bogus order as a result (the wrong SortSupport based
comparator was used for the leading attribute).
This issue is similar to the bug fixed on the master branch by recent
commit cc58eecc5d. But it's a far older issue, one that dates back to
the introduction of the abbreviated keys optimization by commit
4ea51cdfe8.
Backpatch to all supported versions.
Author: Peter Geoghegan <pg@bowt.ie>
Author: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA+hUKG+bA+bmwD36_oDxAoLrCwZjVtST2fqe=b4=qZcmU7u89A@mail.gmail.com
Backpatch: 10-
Such cases will lead to infinite loops, so they're of no practical
value. The numeric variant of generate_series() already threw error
for this, so borrow its message wording.
Per report from Richard Wesley. Back-patch to all supported branches.
Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com
psql, pg_dump, and pg_amcheck share code to process object name
patterns like 'foo*.bar*' to match all tables with names starting in
'bar' that are in schemas starting with 'foo'. Before v14, any number
of extra name parts were silently ignored, so a command line '\d
foo.bar.baz.bletch.quux' was interpreted as '\d bletch.quux'. In v14,
as a result of commit 2c8726c4b0a496608919d1f78a5abc8c9b6e0868, we
instead treated this as a request for table quux in a schema named
'foo.bar.baz.bletch'. That caused problems for people like Justin
Pryzby who were accustomed to copying strings of the form
db.schema.table from messages generated by PostgreSQL itself and using
them as arguments to \d.
Accordingly, revise things so that if an object name pattern contains
more parts than we're expecting, we throw an error, unless there's
exactly one extra part and it matches the current database name.
That way, thisdb.myschema.mytable is accepted as meaning just
myschema.mytable, but otherdb.myschema.mytable is an error, and so
is some.random.garbage.myschema.mytable.
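A standalone sketch of the acceptance rule (a stand-in helper, not the
shared pattern-processing code itself): one extra leading name part is
allowed only if it matches the current database.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool
    check_name_parts(int nparts, int expected, const char *first_part,
                     const char *current_db)
    {
        if (nparts <= expected)
            return true;                    /* table, schema.table */
        if (nparts == expected + 1 &&
            strcmp(first_part, current_db) == 0)
            return true;                    /* thisdb.schema.table */
        return false;                       /* otherdb.* or garbage */
    }

    int
    main(void)
    {
        printf("%d\n", check_name_parts(3, 2, "thisdb", "thisdb"));  /* 1 */
        printf("%d\n", check_name_parts(3, 2, "otherdb", "thisdb")); /* 0 */
        printf("%d\n", check_name_parts(5, 2, "some", "thisdb"));    /* 0 */
        return 0;
    }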
Mark Dilger, per report from Justin Pryzby and discussion among
various people.
Discussion: https://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com
Historically we've been lax about this, but seeing that we're not
lax in C files, there doesn't seem to be a good reason to be so
in the documentation. Remove the existing occurrences (mostly
though not entirely in copied-n-pasted psql output), and modify
.gitattributes so that "git diff --check" will warn about future
cases.
While at it, add *.pm to the set of extensions .gitattributes
knows about, and remove some obsolete entries for files that
we don't have in the tree anymore.
Per followup discussion of commit 5a892c9b1.
Discussion: https://postgr.es/m/E1nfcV1-000kOR-E5@gemulon.postgresql.org
Coverity complains that dpns->outer_plan is dereferenced (to obtain
->targetlist) when it is possibly NULL. We can avoid this by using
dpns->outer_tlist instead, which was already obtained a few lines up.
The fact that we end up with
dpns->inner_tlist = dpns->outer_tlist
is a bit suspicious-looking and maybe worthy of more investigation, but
I'll leave that for another day.
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
Should have been done this way to start with, but I failed to notice.
This way we avoid some pointless initialization, and the variable is
confined to the scope where it is really used.
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
There's no reason to keep a separate local variable when we have a
place for it elsewhere. This allows some code to be simplified.
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
We have some streaming tests that rely on the size of changes, and
these can fail if background activity such as auto analyze produces
additional changes like invalidation messages. Avoid such failures by
increasing autovacuum_naptime to a reasonably high value (1d).
Author: Dilip Kumar
Backpatch-through: 14
Discussion: https://postgr.es/m/1958043.1650129119@sss.pgh.pa.us
Similarly to what was done in 04539e73f, we standardized on SQL being
pronounced "es-que-ell" rather than "sequel" in our documentation.
Two inconsistencies have crept in during the v15 cycle. The others
existed before but were missed in 04539e73f due to none of the searches
accounting for "SQL" being wrapped in tags.
As with 04539e73f, we don't touch code comments here in order to not
create unnecessary back-patching pain.
Discussion: https://postgr.es/m/CAApHDvpML27UqFXnrYO1MJddsKVMQoiZisPvsAGhKE_tsKXquw%40mail.gmail.com
An ALTER FUNCTION command that tried to update both the function's
proparallel property and its proconfig list failed to do the former,
because it stored the new proparallel value into a tuple that was
no longer the interesting one. Carelessness in 7aea8e4f2.
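For illustration, a standalone sketch of the bug class (stub types,
not the real catalog code): heap_modify_tuple() returns a new tuple,
so later field updates must target the returned copy.

    #include <stdio.h>

    typedef struct FuncTuple
    {
        char proparallel;
        char proconfig[24];
    } FuncTuple;

    /* Like heap_modify_tuple(): returns a modified copy of the tuple. */
    static FuncTuple *
    modify_tuple(const FuncTuple *old, const char *config, FuncTuple *buf)
    {
        *buf = *old;
        snprintf(buf->proconfig, sizeof(buf->proconfig), "%s", config);
        return buf;
    }

    int
    main(void)
    {
        FuncTuple  orig = { 'u', "" };
        FuncTuple  copy;
        FuncTuple *tup = &orig;

        tup = modify_tuple(tup, "work_mem=64MB", &copy);
        tup->proparallel = 's';  /* correct: update the returned tuple */
        /* The bug was the equivalent of writing orig.proparallel = 's'
         * here and then storing tup, losing the proparallel update. */

        printf("%c %s\n", tup->proparallel, tup->proconfig);
        return 0;
    }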
(I did not bother with a regression test, because the only likely
future breakage would be for someone to ignore the comment I added
and add some other field update after the heap_modify_tuple step.
A test using existing function properties could not catch that.)
Per report from Bryn Llewellyn. Back-patch to all supported branches.
Discussion: https://postgr.es/m/8AC9A37F-99BD-446F-A2F7-B89AD0022774@yugabyte.com
The new numeric code for extracting the epoch from an interval
accidentally truncated the DAYS_PER_YEAR value to an integer, leading
to results that did not match the floating-point interval_part
calculations. The commit a2da77cdb4661826482ebf2ddba1f953bc74afe4 that
introduced this actually contains the regression test change that this
reverts; I suppose this was missed at the time.
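A standalone illustration of the truncation (DAYS_PER_YEAR is 365.25
in PostgreSQL's datetime.h; the numbers are illustrative):

    #include <stdio.h>

    #define DAYS_PER_YEAR 365.25

    int
    main(void)
    {
        int    truncated = DAYS_PER_YEAR;   /* becomes 365: the bug */
        double exact = DAYS_PER_YEAR;       /* keeps 365.25: the fix */

        printf("truncated: %d, exact: %.2f\n", truncated, exact);
        /* 0.25 day/year * 10 years * 86400 s/day = 216000 seconds */
        printf("seconds lost over 10 years: %.0f\n",
               (exact - truncated) * 10 * 24 * 3600);
        return 0;
    }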
Reported-by: Joseph Koshakow <koshy44@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/CAAvxfHd5n%3D13NYA2q_tUq%3D3%3DSuWU-CufmTf-Ozj%3DfrEgt7pXwQ%40mail.gmail.com
Remove the meaningless "failures" column from the aggregate logging.
It was just the sum of "serialization failures" and "deadlock
failures".
Pointed out by Tom Lane. Patch reviewed by Fabien COELHO.
Discussion: https://postgr.es/m/4183048.1649536705%40sss.pgh.pa.us