The timetz test cases I added in commit a9632830b were unintentionally
sensitive to whether or not DST is active in the PST8PDT time zone.
Thus, they'll start failing this coming weekend, as reported by
Bernhard M. Wiedemann in bug #16689. Fortunately, DST-awareness is
not significant to the purpose of these test cases, so we can just
force them all to PDT (DST hours) to preserve stability of the
results.
Back-patch to v10, as the prior patch was.
Discussion: https://postgr.es/m/16689-57701daa23b377bf@postgresql.org
In almost all other places, we use plain "r" or "w" mode in popen()
calls (the exceptions being for COPY data). This one has been
overlooked (possibly because it's buried in a ".l" flex file?),
but it's using PG_BINARY_R.
Kensuke Okamura complained in bug #16688 that we fail to strip \r
when stripping the trailing newline from a backtick result string.
That's true enough, but we'd also fail to convert embedded \r\n
cleanly, which also seems undesirable. Fixing the popen() mode
seems like the best way to deal with this.
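As a rough illustration of why the mode matters (this is not the actual
psqlscanslash.l code; the function name and loop are invented), reading the
command's output through popen() in text mode lets the C runtime translate
\r\n to \n on Windows, so stripping the trailing newline is all that's left
to do:

    #include <stdio.h>
    #include <string.h>

    static void
    read_backtick_output(const char *cmd)
    {
        FILE   *fd = popen(cmd, "r");   /* text mode, not PG_BINARY_R */
        char    buf[1024];

        if (fd == NULL)
            return;
        while (fgets(buf, sizeof(buf), fd) != NULL)
        {
            size_t      len = strlen(buf);

            /* in text mode there's no stray \r left to strip */
            if (len > 0 && buf[len - 1] == '\n')
                buf[len - 1] = '\0';
            printf("%s\n", buf);
        }
        pclose(fd);
    }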
It's been like this for a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/16688-c649c7b69cd7e6f8@postgresql.org
It's unsafe to do this at parse time because addition of generated
columns to a table would not invalidate stored rules containing
UPDATEs on the table ... but there might now be dependent generated
columns that were not there when the rule was made. This also fixes
an oversight that rewriteTargetView failed to update extraUpdatedCols
when transforming an UPDATE on an updatable view. (Since the new
calculation is downstream of that, rewriteTargetView doesn't actually
need to do anything; but before, there was a demonstrable bug there.)
In v13 and HEAD, this leads to easily-visible bugs because (since
commit c6679e4fc) we won't recalculate generated columns that aren't
listed in extraUpdatedCols. In v12 this bitmap is mostly just used
for trigger-firing decisions, so you'd only notice a problem if a
trigger cared whether a generated column had been updated.
I'd complained about this back in May, but then forgot about it
until bug #16671 from Michael Paul Killian revived the issue.
Back-patch to v12 where this field was introduced. If existing
stored rules contain any extraUpdatedCols values, they'll be
ignored because the rewriter will overwrite them, so the bug will
be fixed even for existing rules. (But note that if someone were
to update to 13.1 or 12.5, store some rules with UPDATEs on tables
having generated columns, and then downgrade to a prior minor version,
they might observe issues similar to what this patch fixes. That
seems unlikely enough to not be worth going to a lot of effort to fix.)
Discussion: https://postgr.es/m/10206.1588964727@sss.pgh.pa.us
Discussion: https://postgr.es/m/16671-2fa55851859fb166@postgresql.org
Now that LOCK TABLE can take any relation type, acquire lock on all
relations that are to be dumped. This prevents schema changes or
deadlock errors that could cause a dump to fail after expending much
effort. pg_dump tests whether the server has this capability, and
disables the feature if it doesn't, so that a patched pg_dump doesn't
fail when connecting to an unpatched server.
Backpatch to 9.5.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Wells Oliver <wells.oliver@gmail.com>
Discussion: https://postgr.es/m/20201021200659.GA32358@alvherre.pgsql
The restriction that only tables and views can be locked by LOCK TABLE
is quite arbitrary, since the underlying mechanism can lock any relation
type. Drop the restriction so that programs such as pg_dump can lock
all relations they're interested in, preventing schema changes that
could cause a dump to fail after expending much effort.
Backpatch to 9.5.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Wells Oliver <wells.oliver@gmail.com>
Discussion: https://postgr.es/m/20201021200659.GA32358@alvherre.pgsql
If the old row has any "missing" attributes that are supposed to
be retrieved from an associated tuple descriptor, the wrong things
happened because the trigger result is shoved directly into an
executor slot that lacks the missing-attribute data. Notably,
CHECK-constraint verification would incorrectly see those columns
as NULL, and so would RETURNING-list evaluation.
Band-aid around this by forcibly expanding the tuple before passing
it to the trigger function. (IMO it was a fundamental misdesign to
put the missing-attribute data into tuple constraints, which so
much of the system considers to be optional. But we're probably
stuck with that now, and will have to continue to apply band-aids
as we find other places with similar issues.)
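A rough sketch of the band-aid's shape (not the exact trigger.c change;
the helper name is invented, while heap_expand_tuple() and the macros are
the usual server facilities from the "fast default" work):

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "utils/rel.h"

    /*
     * Make sure a trigger-returned tuple carries values for all attributes,
     * materializing any "missing" (fast-default) columns from the relation's
     * tuple descriptor before the tuple reaches CHECK/RETURNING processing.
     */
    static HeapTuple
    expand_trigger_result(HeapTuple rettuple, Relation rel)
    {
        TupleDesc   tupdesc = RelationGetDescr(rel);

        if (HeapTupleHeaderGetNatts(rettuple->t_data) < tupdesc->natts)
            rettuple = heap_expand_tuple(rettuple, tupdesc);

        return rettuple;
    }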
Back-patch to v12. v11 would also have the issue, except that
commit 920311ab1 already applied a similar band-aid. That forced
expansion in more cases than seem really necessary, though, so
this isn't a directly equivalent fix.
Amit Langote, with some cosmetic changes by me
Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
We must not set the "done" flag until after we've executed the
initialization function. Otherwise, other threads can fall through
the initial unlocked test before initialization is really complete.
This has been seen to cause rare failures of ecpg's thread/descriptor
test, and it could presumably cause other sorts of misbehavior in
threaded ECPG-using applications, since ecpglib relies on
pthread_once() in several places.
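For illustration, a minimal sketch of the once-initialization pattern in
question (simplified; the names are invented, and real code also needs the
mutex to be statically initialized or otherwise set up):

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct
    {
        volatile bool   done;
        pthread_mutex_t mutex;
    } my_once_t;

    static void
    my_pthread_once(my_once_t *once, void (*initfn)(void))
    {
        if (!once->done)                /* unlocked fast-path test */
        {
            pthread_mutex_lock(&once->mutex);
            if (!once->done)
            {
                initfn();               /* run the initializer first ... */
                once->done = true;      /* ... then publish "done" */
            }
            pthread_mutex_unlock(&once->mutex);
        }
    }

The bug was the equivalent of setting once->done = true before calling
initfn(), letting other threads sail past the unlocked test while
initialization was still in progress.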
Diagnosis and patch by me, based on investigation by Alexander Lakhin.
Back-patch to all supported branches (the bug dates to 2007).
Discussion: https://postgr.es/m/16685-d6cd241872c101d3@postgresql.org
The ExplainCloseGroup arguments for incremental sort usage data
didn't match the corresponding ExplainOpenGroup. This only matters
for XML-format output, which is probably why we'd not noticed.
Daniel Gustafsson, per bug #16683 from Frits Jalvingh
Discussion: https://postgr.es/m/16683-8005033324ad34e9@postgresql.org
There's no functional change at all here, but I'm curious to see
whether this change successfully shuts up Coverity's warning about
a useless strcmp(), which appeared with the previous update.
Discussion: http://mm.icann.org/pipermail/tz/2020-October/029370.html
psql's \connect claims to be able to re-use previous connection
parameters, but in fact it only re-uses the database name, user name,
host name (and possibly hostaddr, depending on version), and port.
This is problematic for assorted use cases. Notably, pg_dump[all]
emits "\connect databasename" commands which we would like to have
re-use all other parameters. If such a script is loaded in a psql run
that initially had "-d connstring" with some non-default parameters,
those other parameters would be lost, potentially causing connection
failure. (Thus, this is the same kind of bug addressed in commits
a45bc8a4f and 8e5793ab6, although the details are much different.)
To fix, redesign do_connect() so that it pulls out all properties
of the old PGconn using PQconninfo(), and then replaces individual
properties in that array. In the case where we don't wish to re-use
anything, get libpq's default settings using PQconndefaults() and
replace entries in that, so that we don't need different code paths
for the two cases.
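A rough sketch of the idea (heavily simplified compared to psql's actual
do_connect(); the function name and the single "dbname" override are just
for illustration):

    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    static PGconn *
    reconnect_with_new_dbname(PGconn *oldconn, const char *new_dbname)
    {
        /* all properties of the old connection, or libpq's defaults */
        PQconninfoOption *opts =
            oldconn ? PQconninfo(oldconn) : PQconndefaults();
        int         n = 0;
        const char **keywords;
        const char **values;
        PGconn     *newconn;

        for (PQconninfoOption *o = opts; o->keyword != NULL; o++)
            n++;

        keywords = malloc((n + 1) * sizeof(char *));
        values = malloc((n + 1) * sizeof(char *));

        for (int i = 0; i < n; i++)
        {
            keywords[i] = opts[i].keyword;
            /* replace individual properties; NULL entries are ignored */
            values[i] = (strcmp(opts[i].keyword, "dbname") == 0) ?
                new_dbname : opts[i].val;
        }
        keywords[n] = values[n] = NULL;

        newconn = PQconnectdbParams(keywords, values, 0);

        free(keywords);
        free(values);
        PQconninfoFree(opts);
        return newconn;
    }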
This does result in an additional behavioral change for cases where
the original connection parameters allowed multiple hosts, say
"psql -h host1,host2", and the \connect request allows re-use of the
host setting. Because the previous coding relied on PQhost(), it
would only permit reconnection to the same host originally selected.
Although one can think of scenarios where that's a good thing, there
are others where it is not. Moreover, that behavior doesn't seem to
meet the principle of least surprise, nor was it documented; nor is
it even clear it was intended, since that coding long pre-dates the
addition of multi-host support to libpq. Hence, this patch is content
to drop it and re-use the host list as given.
Per Peter Eisentraut's comments on bug #16604. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
More precisely, correctly handle the ONLY flag indicating not to
recurse. This was implemented in 86f575948c77 by recursing in
trigger.c, but that's the wrong place; use ATSimpleRecursion instead,
which behaves properly. However, because legacy inheritance has never
recursed in that situation, make sure to do that only for new-style
partitioning.
I noticed this problem while testing a fix for another bug in the
vicinity.
This has been wrong all along, so backpatch to 11.
Discussion: https://postgr.es/m/20201016235925.GA29829@alvherre.pgsql
In shm_mq_receive(), a huge payload could trigger an unjustified
"invalid memory alloc request size" error due to the way the buffer
size is increased.
Add error checks (documenting the upper limit) and avoid the error by
limiting the allocation size to MaxAllocSize.
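A rough sketch of the corrected growth rule (illustrative only, not the
actual shm_mq.c code):

    #include "postgres.h"
    #include "utils/memutils.h"     /* MaxAllocSize */

    static Size
    choose_new_buffer_size(Size needed, Size current)
    {
        Size        newsize = (current > 0) ? current : 1024;

        /* a payload larger than MaxAllocSize can never be assembled */
        if (needed > MaxAllocSize)
            elog(ERROR, "invalid memory alloc request size %zu", needed);

        /* double as before, but don't let doubling exceed the palloc limit */
        while (newsize < needed)
            newsize *= 2;

        return Min(newsize, MaxAllocSize);
    }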
Author: Markus Wanner <markus.wanner@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/3bb363e7-ac04-0ac4-9fe8-db1148755bfa%402ndquadrant.com
When told to process all databases, clusterdb, reindexdb, and vacuumdb
would reconnect by replacing their --maintenance-db parameter with the
name of the target database. If that parameter is a connstring (which
has been allowed for a long time, though we failed to document that
before this patch), we'd lose any other options it might specify, for
example SSL or GSS parameters, possibly resulting in failure to connect.
Thus, this is the same bug as commit a45bc8a4f fixed in pg_dump and
pg_restore. We can fix it in the same way, by using libpq's rules for
handling multiple "dbname" parameters to add the target database name
separately. I chose to apply the same refactoring approach as in that
patch, with a struct to handle the command line parameters that need to
be passed through to connectDatabase. (Maybe someday we can unify the
very similar functions here and in pg_dump/pg_restore.)
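A rough sketch of the libpq trick this relies on (illustrative; the real
code also passes host, port, and username entries):

    #include <libpq-fe.h>

    static PGconn *
    connect_to_target_db(const char *maintenance_db, const char *target_db)
    {
        /*
         * With expand_dbname set, the first "dbname" may itself be a
         * connstring and is expanded into its component parameters; the
         * second "dbname" then overrides just the database name, while
         * the connstring's other options (sslmode, etc.) are kept.
         */
        const char *keywords[] = {"dbname", "dbname", NULL};
        const char *values[] = {maintenance_db, target_db, NULL};

        return PQconnectdbParams(keywords, values, 1);
    }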
Per Peter Eisentraut's comments on bug #16604. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
Since commit 913bbd88d, check_sql_fn_retval() can either insert type
coercion steps in-line in the Query that produces the SQL function's
results, or generate a new top-level Query to perform the coercions,
if modifying the Query's output in-place wouldn't be safe. However,
it appears that the latter case has never actually worked, because
the code tried to inject the new Query back into the query list it was
passed ... which is not the list that will be used for later processing
when we execute the SQL function "normally" (without inlining it).
So we ended up with no coercion happening at run-time, leading to
wrong results or crashes depending on the datatypes involved.
While the regression tests look like they cover this area well enough,
through a huge bit of bad luck all the test cases that exercise the
separate-Query path were checking either inline-able cases (which
accidentally didn't have the bug) or cases that are no-ops at runtime
(e.g., varchar to text), so that the failure to perform the coercion
wasn't obvious. The fact that the cases that don't work weren't
allowed at all before v13 probably contributed to not noticing the
problem sooner, too.
To fix, get rid of the separate "flat" list of Query nodes and instead
pass the real two-level list that is going to be used later. I chose
to make the same change in check_sql_fn_statements(), although that has
no actual bug, just so that we don't need that data structure at all.
This is an API change, as evidenced by the adjustments needed to
callers outside functions.c. That's a bit scary to be doing in a
released branch, but so far as I can tell from a quick search,
there are no outside callers of these functions (and they are
sufficiently specific to our semantics for SQL-language functions that
it's not apparent why any extension would need to call them). In any
case, v13 already changed the API of check_sql_fn_retval() compared to
prior branches.
Per report from pinker. Back-patch to v13 where this code came in.
Discussion: https://postgr.es/m/1603050466566-0.post@n3.nabble.com
- Misc grammar and punctuation fixes.
- Stylistic cleanup: use spaces between function arguments and JSON fields
in examples. For example "foo(a,b)" -> "foo(a, b)". Add semicolon after
last END in a few PL/pgSQL examples that were missing them.
- Clarify the sentence that talked about the "..." and ".." operators by
  not ending it with "..", which made the ".." look the same as "...".
- Fix the syntax description for HAVING: HAVING conditions cannot be
  repeated.
Patch by Justin Pryzby, per Yaroslav Schekin's report. Backpatch to all
supported versions, to the extent that the patch applies easily.
Discussion: https://www.postgresql.org/message-id/20201005191922.GE17626%40telsasoft.com
The Windows documentation insists that every WSAStartup call should
have a matching WSACleanup call. However, if that ever had actual
relevance, it wasn't in this century. Every remotely-modern Windows
kernel is capable of cleaning up when a process exits without doing
that, and must be so to avoid resource leaks in case of a process
crash. Moreover, Postgres backends have done WSAStartup without
WSACleanup since commit 4cdf51e64 in 2004, and we've never seen any
indication of a problem with that.
libpq's habit of doing WSAStartup during connection start and
WSACleanup during shutdown is also rather inefficient, since a
series of non-overlapping connection requests leads to repeated,
quite expensive DLL unload/reload cycles. We document a workaround
for that (having the application call WSAStartup for itself), but
that's just a kluge. It's also worth noting that it's far from
uncommon for applications to exit without doing PQfinish, and
we've not heard reports of trouble from that either.
However, the real reason for acting on this is that recent
experiments by Alexander Lakhin show that calling WSACleanup
during PQfinish is triggering the symptom we occasionally see
that a process using libpq fails to emit expected stdio output.
Therefore, let's change libpq so that it calls WSAStartup only
once per process, during the first connection attempt, and never
calls WSACleanup at all.
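In sketch form (illustrative; the real logic lives in libpq's connection
startup and also has to cope with concurrent first connections):

    #ifdef WIN32
    #include <winsock2.h>
    #include <stdbool.h>

    static int
    ensure_winsock_started(void)
    {
        static bool wsastartup_done = false;
        WSADATA     wsadata;

        if (!wsastartup_done)
        {
            if (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0)
                return -1;          /* report as a connection failure */
            wsastartup_done = true;
        }
        /* note: no matching WSACleanup(), ever */
        return 0;
    }
    #endif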
While at it, get rid of the only other WSACleanup call in our code
tree, in pg_dump/parallel.c; that presumably is equally useless.
Back-patch of HEAD commit 7d00a6b2d.
Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru
In the planner, given an extreme enough case containing a large number of
joins, it was possible for the number of estimated rows to become infinite.
This could cause problems in initial_cost_mergejoin() where we perform
some calculations based on those row estimates.
A problem case, presented by Onder Kalaci, showed an Assert failure from
an Assert checking outerstartsel <= outerendsel. In his test case this
was effectively NaN <= Inf, which is false. The NaN outerstartsel came
from multiplying the infinite outer_path_rows by 0.0.
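A standalone illustration of the arithmetic (nothing PostgreSQL-specific
here):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double      outer_path_rows = INFINITY;
        double      outerstartsel = outer_path_rows * 0.0; /* Inf * 0.0 = NaN */
        double      outerendsel = INFINITY;

        printf("outerstartsel = %g\n", outerstartsel);
        printf("outerstartsel <= outerendsel is %s\n",
               outerstartsel <= outerendsel ? "true" : "false");  /* false */
        return 0;
    }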
In master, this problem was fixed by a90c950fc; however, that fix was too
invasive for the backbranches. Here we just relax the Asserts to allow
them to pass. The worst that appears to happen from this is that we show
NaN cost values and infinite row estimates in EXPLAIN. add_path() would
have had a hard time doing anything useful with such costs, but that does
not really matter: if the row estimates were even close to accurate, such
a plan would not complete this side of the heat death of the universe.
Reported-by: Onder Kalaci
Backpatch: 9.5 to 13
Discussion: https://postgr.es/m/DM6PR21MB1211FF360183BCA901B27F04D80B0@DM6PR21MB1211.namprd21.prod.outlook.com
The rules for choosing the number of parallel workers to perform a
parallel vacuum operation were not clearly specified.
Reported-by: Peter Eisentraut
Author: Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/36aa8aea-61b7-eb3c-263b-648e0cb117b7@2ndquadrant.com
When allocating an EVP context, it was possible to leak some memory
allocated directly by OpenSSL, which PostgreSQL lost track of if the
initialization of the allocated context failed. The cleanup can be
done with EVP_MD_CTX_destroy().
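For illustration, the corrected pattern looks roughly like this (not the
actual PostgreSQL code; the helper name is invented):

    #include <openssl/evp.h>

    static EVP_MD_CTX *
    create_digest_ctx(const EVP_MD *md)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_create();

        if (ctx == NULL)
            return NULL;

        if (EVP_DigestInit_ex(ctx, md, NULL) != 1)
        {
            /* don't leak the OpenSSL-side allocation on failure */
            EVP_MD_CTX_destroy(ctx);
            return NULL;
        }
        return ctx;
    }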
Note that the EVP APIs have existed since OpenSSL 0.9.7, and we have
equivalent implementations in the tree for older versions since ce9b75d
(code removed with 9b7cd59a as of 10~). However, in 9.5 and 9.6, the
existing code makes use of EVP_MD_CTX_destroy() and EVP_MD_CTX_create()
without an equivalent implementation when building the tree with OpenSSL
0.9.6 or older, meaning that this code is in reality broken with such
versions since it was introduced in e2838c5. As we have heard no
complaints about that, it does not seem worth bothering with in 9.5 and
9.6, so I have left that out for simplicity.
Author: Michael Paquier
Discussion: https://postgr.es/m/20201015072212.GC2305@paquier.xyz
Backpatch-through: 9.5
Section 8.5.1.4, which defines these literals, made only a vague
reference to the fact that they might be evaluated too soon to be
safe in non-interactive contexts. Provide a more explicit caution
against misuse. Also, generalize the wording in the related tip in
section 9.9.4: while it clearly described this problem, it implied
(or really, stated outright) that the problem only applies to table
DEFAULT clauses.
Per gripe from Tijs van Dam. Back-patch to all supported branches.
Discussion: https://postgr.es/m/c2LuRv9BiRT3bqIo5mMQiVraEXey_25B4vUn0kDqVqilwOEu_iVF1tbtvLnyQK7yDG3PFaz_GxLLPil2SDkj1MCObNRVaac-7j1dVdFERk8=@thalex.com
This changes zic's default output format from "-b fat" to "-b slim".
We were already using "slim" in v13/HEAD, so those branches drop
the explicit -b switch in the Makefiles. Instead, add an explicit
"-b fat" in v12 and before, so that we don't change the output file
format in those branches. (This is perhaps excessively conservative,
but we decided not to do so in a12079109, and I'll stick with that.)
Other non-cosmetic changes are to drop support for zic's long-obsolete
"-y" switch, and to ensure that strftime() does not change errno
unless it fails.
As usual with tzcode changes, back-patch to all supported branches.
In theory, the second px_find_digest call in px_crypt_md5 could fail
even though the first one succeeded, since resource allocation is
required. Don't skip testing for a failure. (If one did happen,
the likely result would be a crash rather than clean recovery from
an OOM failure.)
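A rough sketch of the fix's shape (assuming pgcrypto's px_find_digest()
convention of returning a negative error code on failure; this is not the
literal px_crypt_md5 code):

    #include "px.h"

    static int
    get_two_md5_contexts(PX_MD **ctx1, PX_MD **ctx2)
    {
        int         err;

        err = px_find_digest("md5", ctx1);
        if (err < 0)
            return err;

        /*
         * This allocates resources too, so it can fail even though the
         * first call succeeded -- don't skip checking the result.
         */
        err = px_find_digest("md5", ctx2);
        if (err < 0)
        {
            px_md_free(*ctx1);
            return err;
        }
        return 0;
    }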
The code's been like this all along, so back-patch to all supported
branches.
Daniel Gustafsson
Discussion: https://postgr.es/m/AA8D6FE9-4AB2-41B4-98CB-AE64BA668C03@yesql.se
Commit a97e85f2b caused "exceed the available area" warnings in PDF
builds. Fine-tune colwidth values to avoid that.
Back-patch to 9.6, like the prior patch. (This is of dubious value
before v13, since we were far from free of such warnings in older
branches. But we might as well keep the SGML looking the same in all
branches.)
Per buildfarm.
Unfortunately, in LLVM 3.9 LLVMGetAttributeCountAtIndex(func, index)
crashes when called with an index that has 0 attributes. Since there's
no way to work around this in the C API, add a small C++ wrapper that
does so.
The only reason this didn't fail before 72559438f92 is that there are
always function attributes...
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20201016001254.w2nfj7gd74jmb5in@alap3.anarazel.de
Backpatch: 11-, like 72559438f92
This commit required support for inline variable definition, which is
not a requirement.
RELEASE NOTE AUTHOR: the author of commit 3c0471b5fd
(pg_upgrade/tablespaces) was Justin Pryzby, not me.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20201016001959.h24fkywfubkv2pc5@alap3.anarazel.de
Backpatch-through: 9.5
Previously, if pg_upgrade failed and the user recreated the cluster but
did not remove the new cluster tablespace directory, a later pg_upgrade
would fail since the new tablespace directory would already exist.
This adds error reporting for this case during pg_upgrade's checks.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20200925005531.GJ23631@telsasoft.com
Backpatch-through: 9.5
Previously we copied only the function attributes. That caused problems
at least on s390x: because we didn't copy the 'zeroext' attribute for
ExecAggTransReparent()'s *IsNull parameters, expressions invoking it
didn't ensure that the upper bytes of the registers were zeroed. In the
relatively rare cases where they were not, ExecAggTransReparent() wrongly
ended up in the newValueIsNull branch due to the register not being zero,
subsequently causing a crash.
It's quite possible that this would cause problems on other platforms, and in
other places than just ExecAggTransReparent() on s390x.
Thanks to Christoph (and the Debian project) for providing me with access
to an s390x machine, allowing me to debug this.
Reported-By: Christoph Berg
Author: Andres Freund
Discussion: https://postgr.es/m/20201015083246.kie5726xerdt3ael@alap3.anarazel.de
Backpatch: 11-, where JIT was added
While registering for postmaster exit events, we have to handle a couple
of edge cases where the postmaster is already gone. Commit 815c2f09
missed one: EACCES must surely imply that PostmasterPid no longer
belongs to our postmaster process (or alternatively an unexpected
permissions model has been imposed on us). Like ESRCH, this should be
treated as a WL_POSTMASTER_DEATH event, rather than being raised with
ereport().
No known problems reported in the wild. Per code review from Tom Lane.
Back-patch to 13.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3624029.1602701929%40sss.pgh.pa.us
Previously it was documented that toast_tuple_target affected only
columns marked as External or Extended. But this description is not
correct: toast_tuple_target also affects columns marked as Main.
Back-patch to v11 where toast_tuple_target reloption was introduced.
Author: Shinya Okano
Reviewed-by: Tatsuhito Kasahara, Fujii Masao
Discussion: https://postgr.es/m/93f46e311a67422e89e770d236059817@oss.nttdata.com
I removed the duplicate command tags for START_REPLICATION inadvertently
in commit 07082b08cc5d, but the replication protocol requires them. The
fact that the replication protocol was broken was not noticed because
all our test cases use an optimized code path that exits early, failing
to verify that the behavior is correct for non-optimized cases. Put
them back.
Also document this protocol quirk.
Add a test case that shows the failure. It might still succeed even
without the patch when run on a fast enough server, but it suffices to
show the bug in enough cases that it would be noticed in buildfarm.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Henry Hinze <henry.hinze@gmail.com>
Reviewed-by: Petr Jelínek <petr.jelinek@2ndquadrant.com>
Discussion: https://postgr.es/m/16643-eaadeb2a1a58d28c@postgresql.org
If WaitEventSetWait() reports that the postmaster has gone away, later
calls to WaitEventSetWait() should continue to report that. Otherwise
further waits that occur in the proc_exit() path after we already
noticed the postmaster's demise could block forever.
Back-patch to 13, where the kqueue support landed.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3624029.1602701929%40sss.pgh.pa.us
Our infinite_recurse() test to verify sane stack-overrun behavior
is affected by a bug of the Linux kernel on PPC64: it will get SIGSEGV
if it receives a signal when the stack depth is (a) over 1MB and
(b) within a few kB of filling the current physical stack allocation.
See https://bugzilla.kernel.org/show_bug.cgi?id=205183.
Since this test is a bit time-consuming and we run it in parallel with
test scripts that do a lot of DDL, it can be expected to get an sinval
catchup interrupt at some point, leading to failure if the timing is
wrong. This has caused more than 100 buildfarm failures over the
past year or so.
While a fix exists for the kernel bug, it might be years before that
propagates into all production kernels, particularly in some of the
older distros we have in the buildfarm. For now, let's just back off
and not run this test on Linux PPC64; that loses nothing in test
coverage so far as our own code is concerned.
To do that, split this test into a new script infinite_recurse.sql
and skip the test when the platform name is powerpc64...-linux-gnu.
Back-patch to v12. Branches before that have not been seen to get
this failure. No doubt that's because the "errors" test was not
run in parallel with other tests before commit 798070ec0, greatly
reducing the odds of an sinval catchup being necessary.
I also back-patched 3c8553547 into v12, just so the new regression
script would look the same in all branches having it.
Discussion: https://postgr.es/m/3479046.1602607848@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190723162703.GM22387%40telsasoft.com
gistRelocateBuildBuffersOnSplit did not get the memo about which
attribute count to use. This could lead to a crash if there were
included columns and buffering build was chosen. (Because there
are random page-split decisions elsewhere in GiST index build,
the crashes are not entirely deterministic.)
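A rough sketch of the distinction involved (the function is invented; the
macros are the usual ones for indexes with INCLUDE columns):

    #include "postgres.h"
    #include "utils/rel.h"

    static int
    gist_compare_attribute_count(Relation index)
    {
        /*
         * For page-split and buffer-relocation decisions, only the key
         * columns participate; using the total attribute count would also
         * read the INCLUDEd columns, which GiST does not index.
         */
        return IndexRelationGetNumberOfKeyAttributes(index);
        /* not: IndexRelationGetNumberOfAttributes(index) */
    }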
Back-patch to v12 where GiST gained support for included columns.
Pavel Borisov
Discussion: https://postgr.es/m/CALT9ZEECCV5m7wvxg46PC-7x-EybUmnpupBGhSFMoAAay+r6HQ@mail.gmail.com
The prohibitValueChange code paths in set_config_option(), which
are executed whenever we re-read a PGC_POSTMASTER variable from
postgresql.conf, neglected to free anything before exiting. Thus
we'd leak the proposed new value of a PGC_STRING variable, as noted
by BoChen in bug #16666. For all variable types, if the check hook
creates an "extra" chunk, we'd also leak that.
These are malloc not palloc chunks, so there is no mechanism for
recovering the leaks before process exit. Fortunately, the values
are typically not very large, meaning you'd have to go through an
awful lot of SIGHUP configuration-reload cycles to make the leakage
amount to anything. Still, for a long-lived postmaster process it
could potentially be a problem.
Oversight in commit 2594cf0e8. Back-patch to all supported branches.
Discussion: https://postgr.es/m/16666-2c41a4eec61b03e1@postgresql.org
The implementation uses smaller code when the "expected" operand is a
small constant, but it needlessly defined the set of acceptable constants
more narrowly than the ABI does. Core PostgreSQL and PGXN don't use the
constant path at all, so this is future-proofing.
Back-patch to v13, where commit 30ee5d17c20dbb282a9952b3048d6ad52d56c371
introduced this code.
Reviewed by Tom Lane. Reported by Christoph Berg.
Discussion: https://postgr.es/m/20201009092825.GD889580@msg.df7cb.de