OpenSSL 3 introduced the concept of providers to support modularization,
and moved the outdated ciphers to the new legacy provider. If it's
not loaded in the user's openssl.cnf file there will be a lot of
regression test failures, so add alternative outputs covering those.
Also document the need to load the legacy provider in order to use older
ciphers with OpenSSL-enabled pgcrypto.
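For illustration only (the commit documents the openssl.cnf route),
the legacy provider can also be loaded programmatically with OpenSSL
3's OSSL_PROVIDER_load; a minimal sketch:

    #include <openssl/provider.h>

    /* Sketch: load the legacy provider explicitly.  Loading any
     * provider by hand disables automatic loading of the default
     * one, so load both. */
    static int
    load_legacy_provider(void)
    {
        if (OSSL_PROVIDER_load(NULL, "legacy") == NULL)
            return -1;
        if (OSSL_PROVIDER_load(NULL, "default") == NULL)
            return -1;
        return 0;
    }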
This will be backpatched to all supported versions once there is
sufficient testing of OpenSSL 3 in the buildfarm.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
Backpatch-through: 9.6
The PX layer in pgcrypto is handling digest padding on its own uniformly
for all backend implementations. Starting with OpenSSL 3.0.0,
EVP_DecryptUpdate doesn't flush the last block when padding is
enabled, so explicitly disable padding as we don't use it.
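In code terms this is a single call on the cipher context; a minimal
sketch:

    #include <openssl/evp.h>

    /* Sketch: the PX layer pads on its own, so turn off OpenSSL's
     * padding to make EVP_DecryptUpdate flush the final block. */
    static void
    px_setup_ctx(EVP_CIPHER_CTX *ctx)
    {
        EVP_CIPHER_CTX_set_padding(ctx, 0);     /* always returns 1 */
    }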
This will be backpatched to all supported versions once there is
sufficient testing of OpenSSL 3 in the buildfarm.
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
Backpatch-through: 9.6
This has previously not been a problem (that anyone ever reported),
but in future OpenSSL versions (3.0.0), where legacy ciphers can be
disabled, this is where the failure is reported. So we need to
catch the error here, otherwise the higher-level functions would
return garbage. The nearby encryption code already handled errors
similarly.
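A minimal sketch of the kind of check involved (function name
illustrative):

    #include <openssl/evp.h>

    /* Sketch: fail cleanly if the cipher cannot be initialized,
     * e.g. because OpenSSL 3.0.0 has the legacy provider disabled. */
    static int
    init_decryption(EVP_CIPHER_CTX *ctx, const EVP_CIPHER *cipher,
                    const unsigned char *key, const unsigned char *iv)
    {
        if (EVP_DecryptInit_ex(ctx, cipher, NULL, key, iv) != 1)
            return -1;          /* let the caller report the error */
        return 0;
    }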
Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/9e9c431c-0adc-7a6d-9b1a-915de1ba3fe7@enterprisedb.com
Backpatch-through: 9.6
As a result of confusion about whether the "char" type is signed or
unsigned, scans for index searches like "col < 'x'" or "col <= 'x'"
would start at the middle of the index, not the left end, thus missing
many or all of the entries they should find. Fortunately, this
is not a symptom of index corruption. It's only the search logic
that is broken, and we can fix it without unpleasant side-effects.
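For illustration (not part of the patch), the underlying C ambiguity
in standalone form:

    #include <stdio.h>

    int
    main(void)
    {
        char c = (char) 0xE9;       /* a high-bit-set byte */

        /* Prints 1 where "char" is signed (0xE9 becomes -23) and 0
         * where it is unsigned (0xE9 is 233); search logic that
         * assumes one behavior starts its scan in the wrong place. */
        printf("%d\n", c < 'x');
        return 0;
    }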
Per report from Jason Kim. This has been wrong since btree_gin's
beginning, so back-patch to all supported branches.
Discussion: https://postgr.es/m/20210810001649.htnltbh7c63re42p@jasonk.me
The error messages using the word "non-negative" are confusing
because it's ambiguous whether they accept zero or not.
This commit improves those error messages by replacing that word
with less ambiguous wording like "greater than zero" or
"greater than or equal to zero".
This commit also adds a note about the word "non-negative" to the
error message style guide, to help with writing new error messages.
Previously, when the postgres_fdw option fetch_size was set to zero,
the error message "fetch_size requires a non-negative integer value"
was reported. This error message was outright buggy, so back-patch
to all supported versions where such a buggy error message could
be thrown.
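For illustration, a hypothetical sketch of the preferred style
(message wording and error code are illustrative, not the committed
text):

    #include "postgres.h"

    static void
    check_fetch_size(int fetch_size)
    {
        if (fetch_size <= 0)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("fetch_size requires an integer value greater than zero")));
    }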
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
As in 50371df26, this is a bad idea since the callback can't really
know what error is being thrown and thus whether or not it is safe
to attempt catalog accesses. Rather than pushing said accesses into
the mainline code where they'd usually be a waste of cycles, we can
look at the query's rangetable instead.
This change does mean that we'll be printing query aliases (if any
were used) rather than the table or column's true name. But that
doesn't seem like a bad thing: it's certainly a more useful definition
in self-join cases, for instance. In any case, it seems unlikely that
any applications would be depending on this detail, so it seems safe
to change.
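A minimal sketch of deriving names from the range table (helper name
illustrative):

    #include "postgres.h"
    #include "nodes/value.h"
    #include "parser/parsetree.h"

    /* Sketch: use the RTE's alias data instead of catalog lookups,
     * which are unsafe inside an error context callback. */
    static const char *
    rte_column_name(List *rtable, int varno, int varattno)
    {
        RangeTblEntry *rte = rt_fetch(varno, rtable);

        /* eref carries the (possibly aliased) column names */
        return strVal(list_nth(rte->eref->colnames, varattno - 1));
    }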
Patch by me. Original complaint by Andres Freund; Bharath Rupireddy
noted the connection to conversion_error_callback.
Discussion: https://postgr.es/m/20210106020229.ne5xnuu6wlondjpe@alap3.anarazel.de
Previously, isolationtester displayed SQL query results using some
ad-hoc code that clearly hadn't had much effort expended on it.
Field values longer than 14 characters weren't separated from
the next field, and usually caused misalignment of the columns
too. Also there was no visual separation of a query's result
from subsequent isolationtester output. This made test result
files confusing and hard to read.
To improve matters, let's use libpq's PQprint() function. Although
that's long since unused by psql, it's still plenty good enough
for the purpose here.
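Usage is about this simple; a sketch with made-up option settings:

    #include <stdio.h>
    #include "libpq-fe.h"

    static void
    print_result(FILE *out, const PGresult *res)
    {
        PQprintOpt opt = {0};

        opt.header = 1;         /* include column headers */
        opt.align = 1;          /* pad fields so columns line up */
        opt.fieldSep = "|";     /* visual separator between fields */
        PQprint(out, res, &opt);
    }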
Like 741d7f104, back-patch to all supported branches, so that this
isn't a stumbling block for back-patching isolation test changes.
Discussion: https://postgr.es/m/582362.1623798221@sss.pgh.pa.us
We've long contended with isolation test results that aren't entirely
stable. Some test scripts insert long delays to try to force stable
results, which is not terribly desirable; but other erratic failure
modes remain, causing unrepeatable buildfarm failures. I've spent a
fair amount of time trying to solve this by improving the server-side
support code, without much success: that way is fundamentally unable
to cope with diffs that stem from chance ordering of arrival of
messages from different server processes.
We can improve matters on the client side, however, by annotating
the test scripts themselves to show the desired reporting order
of events that might occur in different orders. This patch adds
three types of annotations to deal with (a) test steps that might or
might not complete their waits before the isolationtester can see them
waiting; (b) test steps in different sessions that can legitimately
complete in either order; and (c) NOTIFY messages that might arrive
before or after the completion of a step in another session. We might
need more annotation types later, but this seems to be enough to deal
with the instabilities we've seen in the buildfarm. It also lets us
get rid of all the long delays that were previously used, cutting more
than a minute off the runtime of the isolation tests.
Back-patch to all supported branches, because the buildfarm
instabilities affect all the branches, and because it seems desirable
to keep isolationtester's capabilities the same across all branches
to simplify possible future back-patching of tests.
Discussion: https://postgr.es/m/327948.1623725828@sss.pgh.pa.us
isolationtester now warns at the end of a test run if a step is not
used in any permutation, generating a failure with extra diffs. This
is useful for developers to find out if an isolation spec is
over-engineered or if it needs more work.
While at it, clean up all the specs which include steps not used in
any permutations, to simplify them.
This is a backpatch of 989d23b and 06fdc4e, as it is becoming useful to
make all the branches consistent for an upcoming patch that will improve
the output generated by isolationtester.
Author: Michael Paquier
Reviewed-by: Asim Praveen, Melanie Plageman
Discussion: https://postgr.es/m/20190819080820.GG18166@paquier.xyz
Discussion: https://postgr.es/m/794820.1623872009@sss.pgh.pa.us
Backpatch-through: 9.6
The ExecutorEnd hook is invoked in a context that could be quite
long-lived, not the executor's own per-query context as I think
we were sort of assuming. Thus, any cruft generated while producing
the EXPLAIN output could accumulate over multiple queries. This can
result in spectacular leakage if log_nested_statements is on, and
even without that I'm surprised nobody complained before.
To fix, just switch into the executor's context so that anything we
allocate will be released when standard_ExecutorEnd frees the executor
state. We might as well nuke the code's retail pfree of the explain
output string, too; that's laughably inadequate to the need.
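A sketch, close to the shape of the fix:

    #include "postgres.h"
    #include "executor/executor.h"

    /* Sketch: run the EXPLAIN-producing code in the executor's
     * per-query context, which standard_ExecutorEnd frees along with
     * the rest of the executor state. */
    static void
    explain_ExecutorEnd(QueryDesc *queryDesc)
    {
        MemoryContext oldcxt;

        oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
        /* ... build and log the EXPLAIN output here ... */
        MemoryContextSwitchTo(oldcxt);

        standard_ExecutorEnd(queryDesc);
    }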
Japin Li, per report from Jeff Janes. This bug is old, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/CAMkU=1wCVtbeRn0s9gt12KwQ7PLXovbpM8eg25SYocKW3BT4hg@mail.gmail.com
When reporting connection errors, we might show a database name in the
message that's not the one we actually tried to connect to, if the
database was taken from libpq defaults instead of from user parameters.
Fix such error messages to use PQdb(), which reports the correct name.
(But, per commit 2930c05634, make sure not to try to print NULL.)
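A sketch of the reporting pattern:

    #include <stdio.h>
    #include "libpq-fe.h"

    /* Sketch: report the database name libpq actually used, guarding
     * against a NULL return. */
    static void
    report_connect_error(PGconn *conn)
    {
        const char *dbname = PQdb(conn);

        fprintf(stderr, "could not connect to database \"%s\": %s",
                dbname ? dbname : "", PQerrorMessage(conn));
    }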
Apply to branches 9.5 through 13. Branch master has already been
changed differently by commit 58cd8dca3d.
Reported-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA+TgmobssJ6rS22dspWnu-oDxXevGmhMD8VcRBjmj-b9UDqRjw@mail.gmail.com
By default VACUUM will skip pages that it can't immediately get
exclusive access to, which means that even activities as harmless
and unpredictable as checkpoint buffer writes might prevent a page
from being processed. Ordinarily this is no big deal, but we have
a small number of test cases that examine the results of VACUUM's
processing and therefore will fail if the page of interest is skipped.
This seems to be the explanation for some rare buildfarm failures.
To fix, add the DISABLE_PAGE_SKIPPING option to the VACUUM commands
in tests where this could be an issue.
In passing, remove a duplicated query in pageinspect/sql/page.sql.
Back-patch as necessary (some of these cases are as old as v10).
Discussion: https://postgr.es/m/413923.1611006484@sss.pgh.pa.us
In postgres_fdw, the cached connections to foreign servers will not be
closed until the local session exits if the user mappings or foreign
servers that those connections depend on are dropped. Those
connections are effectively leaked.
To fix that connection leak issue, after a change to a pg_foreign_server
or pg_user_mapping catalog entry, this commit makes postgres_fdw close
the connections depending on that entry immediately if the current
transaction has not used those connections yet. Otherwise, it marks
those connections as invalid and then closes them at the end of the
current transaction, since they cannot be closed in the midst of the
transaction using them.
Closed connections will be remade at the next opportunity if necessary.
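The mechanism hangs off syscache invalidation callbacks; a sketch
with the hash-walk details omitted:

    #include "postgres.h"
    #include "utils/inval.h"
    #include "utils/syscache.h"

    static void
    pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue)
    {
        /* Walk the connection cache (not shown); close entries that
         * match the invalidated server or user mapping if the current
         * transaction hasn't touched them, else just mark them
         * invalid so they get closed at transaction end. */
    }

    static void
    register_inval_callbacks(void)
    {
        CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
                                      pgfdw_inval_callback, (Datum) 0);
        CacheRegisterSyscacheCallback(USERMAPPINGOID,
                                      pgfdw_inval_callback, (Datum) 0);
    }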
Back-patch to all supported branches.
Author: Bharath Rupireddy
Reviewed-by: Zhihong Yu, Zhijie Hou, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVNcGH_6qLY-4_tXz8JLvA+4yeBThRfxMz7Oxbk1aHcpQ@mail.gmail.com
The following routines are called within pgcrypto when handling digests
but there were no checks for failures:
- EVP_MD_CTX_size (can fail with -1 as of 3.0.0)
- EVP_MD_CTX_block_size (can fail with -1 as of 3.0.0)
- EVP_DigestInit_ex
- EVP_DigestUpdate
- EVP_DigestFinal_ex
A set of elog(ERROR) calls is added by this commit to detect such
failures, which should never happen except in the event of a
processing failure internal to OpenSSL.
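Each check follows the same shape; a minimal sketch for one of them:

    #include "postgres.h"
    #include <openssl/evp.h>

    static void
    digest_update(EVP_MD_CTX *ctx, const unsigned char *data,
                  size_t len)
    {
        /* EVP_DigestUpdate returns 1 on success */
        if (EVP_DigestUpdate(ctx, data, len) != 1)
            elog(ERROR, "could not update EVP digest context");
    }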
Note that it would be possible to use ERR_reason_error_string() to get
more context about such errors, but these refer mainly to the internals
of OpenSSL, so it is not really obvious how useful that would be. This
is left out for simplicity.
Per report from Coverity. Thanks to Tom Lane for the discussion.
Backpatch-through: 9.5
Whether from size overflow in gistSplit or from secondary splits,
picksplit is (rarely) called with exactly two items to split.
Formerly, due to special-case handling of the last item, this would
lead to access to an uninitialized cache entry; prior to PG 13 this
might have been harmless or at worst led to an incorrect union datum,
but in 13 onwards it can cause a backend crash from using an
uninitialized pointer.
Repair by removing the special case, which was deemed not to have been
appropriate anyway. Backpatch all the way, because this bug has
existed since pg_trgm was added.
Per report on IRC from user "ftzdomino". Analysis and testing by me,
patch from Alexander Korotkov.
Discussion: https://postgr.es/m/87k0usfdxg.fsf@news-spur.riddles.org.uk
Introduce TimestampDifferenceMilliseconds() to simplify callers
that would rather have the difference in milliseconds, instead of
the select()-oriented seconds-and-microseconds format. This gets
rid of at least one integer division per call, and it eliminates
some apparently-easy-to-mess-up arithmetic.
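A sketch of the new helper's semantics:

    #include "postgres.h"
    #include "utils/timestamp.h"

    /* Sketch: difference in milliseconds, rounded up, clamped at 0 */
    long
    TimestampDifferenceMilliseconds(TimestampTz start_time,
                                    TimestampTz stop_time)
    {
        TimestampTz diff = stop_time - start_time;

        if (diff <= 0)
            return 0;
        return (long) ((diff + 999) / 1000);
    }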
Two of these call sites were in fact wrong:
* pg_prewarm's autoprewarm_main() forgot to multiply the seconds
by 1000, thus ending up with a delay 1000X shorter than intended.
That doesn't quite make it a busy-wait, but close.
* postgres_fdw's pgfdw_get_cleanup_result() thought it needed to compute
microseconds not milliseconds, thus ending up with a delay 1000X longer
than intended. Somebody along the way had noticed this problem but
misdiagnosed the cause, and imposed an ad-hoc 60-second limit rather
than fixing the units. This was relatively harmless in context, because
we don't care that much about exactly how long this delay is; still,
it's wrong.
There are a few more callers of TimestampDifference() that don't
have a direct need for seconds-and-microseconds, but can't use
TimestampDifferenceMilliseconds() either because they do need
microsecond precision or because they might possibly deal with
intervals long enough to overflow 32-bit milliseconds. It might be
worth inventing another API to improve that, but that seems outside
the scope of this patch; so those callers are untouched here.
Given the fact that we are fixing some bugs, and the likelihood
that future patches might want to back-patch code that uses this
new API, back-patch to all supported branches.
Alexey Kondratov and Tom Lane
Discussion: https://postgr.es/m/3b1c053a21c07c1ed5e00be3b2b855ef@postgrespro.ru
Specifically, this blocks DECLARE ... WITH HOLD and firing of deferred
triggers within index expressions and materialized view queries. An
attacker having permission to create non-temp objects in at least one
schema could execute arbitrary SQL functions under the identity of the
bootstrap superuser. One can work around the vulnerability by disabling
autovacuum and not manually running ANALYZE, CLUSTER, REINDEX, CREATE
INDEX, VACUUM FULL, or REFRESH MATERIALIZED VIEW. (Don't restore from
pg_dump, since it runs some of those commands.) Plain VACUUM (without
FULL) is safe, and all commands are fine when a trusted user owns the
target object. Performance may degrade quickly under this workaround,
however. Back-patch to 9.5 (all supported versions).
Reviewed by Robert Haas. Reported by Etienne Stalmans.
Security: CVE-2020-25695
When allocating an EVP context, it would have been possible to leak
some memory allocated directly by OpenSSL, which PostgreSQL lost track
of if initialization of the allocated context failed. The cleanup can
be done with EVP_MD_CTX_destroy().
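A minimal sketch of the corrected allocation path:

    #include <openssl/evp.h>

    static EVP_MD_CTX *
    create_digest_ctx(const EVP_MD *md)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_create();

        if (ctx == NULL)
            return NULL;
        if (EVP_DigestInit_ex(ctx, md, NULL) != 1)
        {
            /* free the OpenSSL-side allocation instead of leaking */
            EVP_MD_CTX_destroy(ctx);
            return NULL;
        }
        return ctx;
    }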
Note that EVP APIs exist since OpenSSL 0.9.7 and we have in the tree
equivalent implementations for older versions since ce9b75d (code
removed with 9b7cd59a as of 10~). However, in 9.5 and 9.6, the existing
code makes use of EVP_MD_CTX_destroy() and EVP_MD_CTX_create() without
an equivalent implementation when building the tree with OpenSSL 0.9.6
or older, meaning that this code is in reality broken with such versions
since it got introduced in e2838c5. As we have heard no complaints
about that, it does not seem worth bothering with in 9.5 and 9.6, so I
have left that out for simplicity.
Author: Michael Paquier
Discussion: https://postgr.es/m/20201015072212.GC2305@paquier.xyz
Backpatch-through: 9.5
In theory, the second px_find_digest call in px_crypt_md5 could fail
even though the first one succeeded, since resource allocation is
required. Don't skip testing for a failure. (If one did happen,
the likely result would be a crash rather than clean recovery from
an OOM failure.)
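The added check amounts to this (surrounding logic elided):

    PX_MD  *ctx;

    /* px_find_digest() returns a negative PX error code on failure;
     * check the second lookup, not only the first */
    if (px_find_digest("md5", &ctx) < 0)
        return NULL;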
The code's been like this all along, so back-patch to all supported
branches.
Daniel Gustafsson
Discussion: https://postgr.es/m/AA8D6FE9-4AB2-41B4-98CB-AE64BA668C03@yesql.se
Hostile objects located within the installation-time search_path could
capture references in an extension's installation or upgrade script.
If the extension is being installed with superuser privileges, this
opens the door to privilege escalation. While such hazards have existed
all along, their urgency increases with the v13 "trusted extensions"
feature, because that lets a non-superuser control the installation path
for a superuser-privileged script. Therefore, make a number of changes
to make such situations more secure:
* Tweak the construction of the installation-time search_path to ensure
that references to objects in pg_catalog can't be subverted; and
explicitly add pg_temp to the end of the path to prevent attacks using
temporary objects.
* Disable check_function_bodies within installation/upgrade scripts,
so that any security gaps in SQL-language or PL-language function bodies
cannot create a risk of unwanted installation-time code execution.
* Adjust lookup of type input/receive functions and join estimator
functions to complain if there are multiple candidate functions. This
prevents capture of references to functions whose signature is not the
first one checked; and it's arguably more user-friendly anyway.
* Modify various contrib upgrade scripts to ensure that catalog
modification queries are executed with secure search paths. (These
are in-place modifications with no extension version changes, since
it is the update process itself that is at issue, not the end result.)
Extensions that depend on other extensions cannot be made fully secure
by these methods alone; therefore, revert the "trusted" marking that
commit eb67623c9 applied to earthdistance and hstore_plperl, pending
some better solution to that set of issues.
Also add documentation around these issues, to help extension authors
write secure installation scripts.
Patch by me, following an observation by Andres Freund; thanks
to Noah Misch for review.
Security: CVE-2020-14350
These files were missed when commit a3ab7a707 added a new test query.
Understandable considering these files no longer exist in HEAD.
Per buildfarm member pademelon.
A compressed stream may end with an empty packet. In this case
decompression finishes before reading the empty packet, and the packet
remaining in the stream causes a failure when reading the following
data. This commit makes sure to consume such extra data, avoiding a
failure when decompressing the data. This corner case was easily
reproducible with a data length of 16kB, and has existed since
e94dd6a. A cheap regression test is added to cover this case, based
on a random, incompressible string.
The first attempt at this patch uncovered an older failure within the
compression logic of pgcrypto, fixed by b9b6105. This involved SLES
15 with z390, where a custom flavor of libz gets used. Bonus thanks
to Mark Wong for providing access to the specific environment.
Reported-by: Frank Gagnepain
Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
Backpatch-through: 9.5
contrib/pgcrypto mishandled the case where deflate() does not consume
all of the offered input on the first try. It reset the next_in pointer
to the start of the input instead of leaving it alone, causing the wrong
data to be fed to the next deflate() call.
This has been broken since pgcrypto was committed. The reason for the
lack of complaints seems to be that it's fairly hard to get stock zlib
to not consume all the input, so long as the output buffer is big enough
(which it normally would be in pgcrypto's usage; AFAICT the input is
always going to be packetized into packets no larger than ZIP_OUT_BUF).
However, IBM's zlibNX implementation for AIX evidently will do it
in some cases.
I did not add a test case for this, because I couldn't find one that
would fail with stock zlib. When we put back the test case for
bug #16476, that will cover the zlibNX situation well enough.
While here, write deflate()'s second argument as Z_NO_FLUSH per its
API spec, instead of hard-wiring the value zero.
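The corrected loop looks roughly like this (a sketch assuming the
caller set up next_out/avail_out with a large-enough buffer, as in
pgcrypto's usage):

    #include <zlib.h>

    static int
    compress_chunk(z_stream *st, unsigned char *in, unsigned inlen)
    {
        st->next_in = in;
        st->avail_in = inlen;

        while (st->avail_in > 0)
        {
            /* do NOT reset next_in here: deflate() advances next_in
             * and avail_in itself as it consumes input */
            if (deflate(st, Z_NO_FLUSH) != Z_OK)
                return -1;
        }
        return 0;
    }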
Per buildfarm results and subsequent investigation.
Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
This reverts commit 9e10898, after finding out that buildfarm members
running SLES 15 on z390 complain about the compression and
decompression logic of the new test: pipistrelles, barbthroat and
steamerduck. Those hosts visibly use hardware-specific changes to
improve zlib performance, requiring more investigation.
Thanks to Tom Lane for the discussion.
Discussion: https://postgr.es/m/20200722093749.GA2564@paquier.xyz
Backpatch-through: 9.5
A compressed stream may end with an empty packet, and PGP
decompression finished before reading this empty packet from the
remaining stream. This caused a failure in pgcrypto, which handled
this case as corrupted data. This commit makes sure to consume such
extra data, avoiding a failure when decompressing the entire stream.
This corner case was reproducible with a data length of 16kB, and has
existed since its introduction in e94dd6a. A cheap regression test is
added to cover this case.
Thanks to Jeff Janes for the extra investigation.
Reported-by: Frank Gagnepain
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
Backpatch-through: 9.5
Writing a trailing semicolon in a macro is almost never the right thing,
because you almost always want to write a semicolon after each macro
call instead. (Even if there was some reason to prefer not to, pgindent
would probably make a hash of code formatted that way; so within PG the
rule should basically be "don't do it".) Thus, if we have a semi inside
the macro, the compiler sees "something;;". Much of the time the extra
empty statement is harmless, but it could lead to mysterious syntax
errors at call sites. In perhaps an overabundance of neatnik-ism, let's
run around and get rid of the excess semicolons wherever possible.
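A made-up example of how the extra semicolon bites:

    #include <stdlib.h>

    /* BAD: trailing semicolon inside the macro */
    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0);

    /* A call site like this then fails to compile: the expansion
     * ends in ";;", and the empty statement detaches the else from
     * the if, yielding a mysterious syntax error. */
    if (cond)
        FREE_AND_NULL(ptr);
    else
        something_else();

    /* GOOD: no trailing semicolon; the call site's own ";" suffices */
    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)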
The only thing worse than a mysterious syntax error is a mysterious
syntax error that only happens in the back branches; therefore,
backpatch these changes where relevant, which is most of them because
most of these mistakes are old. (The lack of reported problems shows
that this is largely a hypothetical issue, but still, it could bite
us in some future patch.)
John Naylor and Tom Lane
Discussion: https://postgr.es/m/CACPNZCs0qWTqJ2QUSGJ07B7uvAvzMb-KbG2q+oo+J3tsWN5cqw@mail.gmail.com
fixup_whole_row_references() did the wrong thing with a dropped column,
resulting in a commit-time warning about a cache reference leak.
I (tgl) added a test case exercising this, but back-patched the test
only as far as v10; the patch didn't apply cleanly to 9.6 and it
didn't seem worth the trouble to adapt it. The bug is pretty old
though, so apply the code change all the way back.
Michael Luo, with cosmetic improvements by me
Discussion: https://postgr.es/m/BYAPR08MB5606D1453D7F50E2AF4D2FD29AD80@BYAPR08MB5606.namprd08.prod.outlook.com
contrib/lo's lo_manage() thought it could use
trigdata->tg_trigger->tgname in its error message about
not being called as a trigger. That naturally led to a core dump.
unique_key_recheck() figured it could Assert that fcinfo->context
is a TriggerData node in advance of having checked that it's
being called as a trigger. That's harmless in production builds,
and perhaps not that easy to reach in any case, but it's logically
wrong.
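A sketch of the corrected ordering in lo_manage():

    #include "postgres.h"
    #include "fmgr.h"
    #include "commands/trigger.h"

    Datum
    lo_manage(PG_FUNCTION_ARGS)
    {
        TriggerData *trigdata;

        /* check this BEFORE touching fcinfo->context */
        if (!CALLED_AS_TRIGGER(fcinfo))
            elog(ERROR, "lo_manage: not fired by trigger manager");
        trigdata = (TriggerData *) fcinfo->context;     /* now safe */
        /* ... normal trigger processing ... */
        return PointerGetDatum(trigdata->tg_trigtuple);
    }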
The first of these per bug #16340 from William Crowell;
the second from manual inspection of other CALLED_AS_TRIGGER
call sites.
Back-patch the lo.c change to all supported branches, the
other to v10 where the thinko crept in.
Discussion: https://postgr.es/m/16340-591c7449dc7c8c47@postgresql.org
Experimentation shows that it's not hard at all to drive the
old implementation of "ltree ~ lquery" match to stack overflow,
so throw in a check_stack_depth() call, as I just did in HEAD.
I wasn't able to make it take a long time, because all the
pathological cases I tried hit stack overflow first; but
I bet there are some others that do take a long time, so add
CHECK_FOR_INTERRUPTS() too.
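Both guards sit at the top of the recursive matcher; a sketch (the
function shape is illustrative):

    #include "postgres.h"
    #include "miscadmin.h"

    static bool
    checkCond(void *state)      /* lquery matching state */
    {
        /* error out cleanly instead of overflowing the stack */
        check_stack_depth();
        /* let a long-running pathological match be cancelled */
        CHECK_FOR_INTERRUPTS();

        /* ... recursive matching logic ... */
        return false;
    }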
Discussion: https://postgr.es/m/CAP_rww=waX2Oo6q+MbMSiZ9ktdj6eaJj0cQzNu=Ry2cCDij5fw@mail.gmail.com
These uint16 fields could be overflowed by excessively long input,
producing strange results. Complain for invalid input.
Likewise check for out-of-range values of the repeat counts in lquery.
(We don't try too hard on that one, notably not bothering to detect
if atoi's result has overflowed.)
Also detect length overflow in ltree_concat.
In passing, be more consistent about whether "syntax error" messages
include the type name. Also, clarify the documentation about what
the size limit is.
This has been broken for a long time, so back-patch to all supported
branches.
Nikita Glukhov, reviewed by Benjie Gillam and Tomas Vondra
Discussion: https://postgr.es/m/CAP_rww=waX2Oo6q+MbMSiZ9ktdj6eaJj0cQzNu=Ry2cCDij5fw@mail.gmail.com
This extends the fixes made in commit 085b6b667 to other SRFs with the
same bug, namely pg_logdir_ls(), pgrowlocks(), pg_timezone_names(),
pg_ls_dir(), and pg_tablespace_databases().
Also adjust various comments and documentation to warn against
expecting to clean up resources during a ValuePerCall SRF's final
call.
Back-patch to all supported branches, since these functions were
all born broken.
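One way to avoid relying on final-call cleanup is to materialize the
whole result during the first call; a sketch of that pattern (not
necessarily the committed approach for every function):

    #include "postgres.h"
    #include "funcapi.h"
    #include "miscadmin.h"
    #include "utils/tuplestore.h"

    /* Sketch: build the entire result in a tuplestore up front,
     * releasing files/handles before returning to the executor. */
    static void
    materialize_result(FunctionCallInfo fcinfo)
    {
        ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
        MemoryContext oldcxt;

        oldcxt = MemoryContextSwitchTo(rsinfo->econtext->ecxt_per_query_memory);
        rsinfo->returnMode = SFRM_Materialize;
        rsinfo->setResult = tuplestore_begin_heap(true, false, work_mem);
        MemoryContextSwitchTo(oldcxt);

        /* ... scan the resource, tuplestore_putvalues() each row,
         * then close the resource here ... */
    }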
Justin Pryzby, with cosmetic tweaks by me
Discussion: https://postgr.es/m/20200308173103.GC1357@telsasoft.com
Manifested as

    ERROR:  subtransaction logged without previous top-level txn record

this check forbids legitimate behaviors like:
- First xl_xact_assignment record is beyond reading, i.e. earlier
restart_lsn.
- After restart_lsn there is some change of a subxact.
- After that, there is second xl_xact_assignment (for another subxact)
revealing the relationship between top and first subxact.
Such a transaction won't be streamed anyway because we hadn't seen it
in full. Determining for sure whether a transaction whose records
appear after the snapshot was deserialized can be streamed requires
knowing whether it wrote anything before the deserialization point:
if it did, it hasn't been seen in full and can't be decoded. The
snapshot doesn't carry that information, so there is no easy way to
relax the check.
Reported-by: Hsu, John
Diagnosed-by: Arseny Sher
Author: Arseny Sher, Amit Kapila
Reviewed-by: Amit Kapila, Dilip Kumar
Backpatch-through: 9.5
Discussion: https://postgr.es/m/AB5978B2-1772-4FEE-A245-74C91704ECB0@amazon.com
In a statement like "UPDATE remote_tab SET (x,y) = (SELECT ...)",
we'd conclude that the statement could be directly executed remotely,
because the sub-SELECT is in a resjunk tlist item that's not examined
for shippability. Currently that ends up crashing if the sub-SELECT
contains any remote Vars. Prevent the crash by deeming MULTIEXPR
Params to be unshippable.
This is a bit of a brute-force solution, since if the sub-SELECT
*doesn't* contain any remote Vars, the current execution technology
would work; but that's not a terribly common use-case for this syntax,
I think. In any case, we generally don't try to ship sub-SELECTs, so
it won't surprise anybody that this doesn't end up as a remote direct
update. I'd be inclined to see if that general limitation can be fixed
before worrying about this case further.
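A sketch of the shippability tweak (helper name illustrative; the
real test lives in postgres_fdw's expression walker):

    #include "postgres.h"
    #include "nodes/primnodes.h"

    /* Sketch: refuse to ship MULTIEXPR Params, so statements like
     * "SET (x,y) = (SELECT ...)" never become remote direct updates. */
    static bool
    param_is_shippable(Param *param)
    {
        if (param->paramkind == PARAM_MULTIEXPR)
            return false;
        return true;
    }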
Per report from Lukáš Sobotka.
Back-patch to 9.6. 9.5 had MULTIEXPR, but we didn't try to perform
remote direct updates then, so the case didn't arise anyway.
Discussion: https://postgr.es/m/CAJif3k+iA_ekBB5Zw2hDBaE1wtiQa4LH4_JUXrrMGwTrH0J01Q@mail.gmail.com
While building a hash map of categories in load_categories_hash, the
resulting category names have not thus far been checked to ensure
they are not null. Prior to pg12, null category names worked on some
platforms, to the extent that they did not crash; this is because
those platforms' system libraries have an snprintf which can deal
with being passed a null pointer argument for a string. But even in
those cases null categories did nothing useful, and on other
platforms they crashed.
As of pg12, our own version of snprintf gets called, and it does
not deal with null pointer arguments at all, and crashes consistently.
Fix that by disallowing null categories. They never worked usefully,
and no one has ever asked for them to work previously. Back-patch to
all supported branches.
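The fix reduces to a null check while loading the hash; a sketch (the
message wording is illustrative):

    /* Sketch: reject null category names while filling the hash */
    if (catname == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
                 errmsg("provided \"categories\" SQL must not return NULL values")));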
Reported-By: Ireneusz Pluta
Discussion: https://postgr.es/m/16176-7489719b05e4303c@postgresql.org
We realized years ago that it's better for libpq to accept all
connection parameters syntactically, even if some are ignored or
restricted due to lack of the feature in a particular build.
However, that lesson from the SSL support was for some reason never
applied to the GSSAPI support. This is causing various buildfarm
members to have problems with a test case added by commit 6136e94dc,
and it's just a bad idea from a user-experience standpoint anyway,
so fix it.
While at it, fix some places where parameter-related infrastructure
was added with the aid of a dartboard, or perhaps with the aid of
the anti-pattern "add new stuff at the end". It should be safe
to rearrange the contents of struct pg_conn even in released
branches, since that's private to libpq (and we'd have to move
some fields in some builds to fix this, anyway).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us
The dict_int text search dictionary template accepts a maxlen
parameter, which is then used to cap the length of input strings. The
value was not properly checked, and the code simply does
txt[d->maxlen] = '\0';
to insert a terminator, leading to segfaults with negative values.
This commit simply rejects values less than 1. The issue has been
there since dict_int was introduced in 9.3, so backpatch all the way
back to 9.4, which is the oldest supported version.
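The added guard amounts to this (a sketch; message wording
illustrative):

    if (d->maxlen < 1)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("maxlen value has to be >= 1")));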
Reported-by: cili
Discussion: https://postgr.es/m/16144-a36a5bef7657047d@postgresql.org
Backpatch-through: 9.4
Currently, postgres_fdw does not support preparing a remote
transaction for two-phase commit, even when the remote transaction is
read-only, but the old error message appeared to imply that this was
unsupported only if the remote transaction had modified remote tables.
Change the message so as to also cover the case where the remote
transaction is read-only.
Also fix a comment above the message.
Also add a note about the lack of supporting PREPARE TRANSACTION to the
postgres_fdw documentation.
Reported-by: Gilles Darold
Author: Gilles Darold and Etsuro Fujita
Reviewed-by: Michael Paquier and Kyotaro Horiguchi
Backpatch-through: 9.4
Discussion: https://postgr.es/m/08600ed3-3084-be70-65ba-279ab19618a5%40darold.net
In-core relation options can use a custom lock mode since 47167b7,
which has lowered the lock available for some autovacuum parameters.
However, it forgot to consider custom relation options. This causes
failures with ALTER TABLE SET when changing a custom relation option,
as its lock is not defined. The existing APIs to define a custom
reloption do not allow defining a custom lock mode, so enforce its
initialization to AccessExclusiveLock, which should be safe enough in
all cases. An upcoming patch will extend the existing APIs to allow a
custom lock mode to be defined.
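The enforced initialization is essentially a one-liner; a sketch,
with names assumed from reloptions.c:

    /* when allocating a custom reloption coming from
     * add_bool_reloption() and friends, force a safe lock level */
    newoption->lockmode = AccessExclusiveLock;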
The problem can be reproduced with bloom indexes, so add a test there.
Reported-by: Nikolay Sharplov
Analyzed-by: Thomas Munro, Michael Paquier
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
Backpatch-through: 9.6
contrib/amcheck failed to consider the possibility that unlogged
relations will not have any main relation fork files when running in hot
standby mode. This led to low-level "can't happen" errors that complain
about the absence of a relfilenode file.
To fix, simply skip verification of unlogged index relations during
recovery. In passing, add a direct check for the presence of the main
fork just before verification proper begins, so that a missing main
relation fork file is reported cleanly.
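The recovery-time skip is roughly this (a sketch; message wording
illustrative):

    /* Sketch: unlogged indexes have no main fork during recovery */
    if (rel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED &&
        RecoveryInProgress())
    {
        ereport(NOTICE,
                (errmsg("cannot verify unlogged index \"%s\" during recovery, skipping",
                        RelationGetRelationName(rel))));
        return;
    }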
Author: Andrey Borodin, Peter Geoghegan
Reported-By: Andrey Borodin
Diagnosed-By: Andrey Borodin
Discussion: https://postgr.es/m/DA9B33AC-53CB-4643-96D4-7A0BBC037FA1@yandex-team.ru
Backpatch: 10-, where amcheck was introduced.
contrib/intarray considers "arraycol <@ constant-array" to be indexable,
but its GiST opclass code fails to reliably find index entries for empty
array values (which of course should trivially match such queries).
This is because the test condition to see whether we should descend
through a non-leaf node is wrong.
Unfortunately, empty array entries could be anywhere in the index,
as these index opclasses are currently designed. So there's no way
to fix this except by lobotomizing <@ indexscans to scan the whole
index ... which is what this patch does. That's pretty unfortunate:
the performance is now actually worse than a seqscan, in most cases.
We'd be better off to remove <@ from the GiST opclasses entirely,
and perhaps a future non-back-patchable patch will do so.
In the meantime, applications whose performance is adversely impacted
have a couple of options. They could switch to a GIN index, which
doesn't have this bug, or they could replace "arraycol <@ constant-array"
with "arraycol <@ constant-array AND arraycol && constant-array".
That will provide about the same performance as before, and it will find
all non-empty subsets of the given constant-array, which is all that
could reliably be expected of the query before.
While at it, add some more regression test cases to improve code
coverage of contrib/intarray.
In passing, adjust resize_intArrayType so that when it's returning an
empty array, it uses construct_empty_array for that rather than
cowboy hacking on the input array. While the hack produces an array
that looks valid for most purposes, it isn't bitwise equal to empty
arrays produced by other code paths, which could have subtle odd
effects. I don't think this code path is performance-critical
enough to justify such shortcuts. (Back-patch this part only as far
as v11; before commit 01783ac36 we were not careful about this in
other intarray code paths either.)
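That adjustment reduces to a sketch like:

    /* Sketch: return a canonical empty array rather than hacking the
     * input array's header down to zero elements. */
    if (num <= 0)
        return construct_empty_array(INT4OID);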
Back-patch the <@ fixes to all supported versions, since this was
broken from day one.
Patch by me; thanks to Alexander Korotkov for review.
Discussion: https://postgr.es/m/458.1565114141@sss.pgh.pa.us
When loading multiple modules that use check_password_hook_type,
loading passwordcheck would remove any trace of a previously-loaded
hook. Unloading the module would also cause previous hooks to be
entirely gone.
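The fix is the standard save-and-chain hook pattern; a sketch:

    #include "postgres.h"
    #include "commands/user.h"
    #include "libpq/crypt.h"

    static check_password_hook_type prev_check_password_hook = NULL;

    static void
    check_password(const char *username, const char *shadow_pass,
                   PasswordType password_type, Datum validuntil_time,
                   bool validuntil_null)
    {
        /* ... policy checks ..., then chain to the previous hook */
        if (prev_check_password_hook)
            prev_check_password_hook(username, shadow_pass,
                                     password_type, validuntil_time,
                                     validuntil_null);
    }

    void
    _PG_init(void)
    {
        /* remember whatever hook was installed before us */
        prev_check_password_hook = check_password_hook;
        check_password_hook = check_password;
    }

    void
    _PG_fini(void)
    {
        /* put the previous hook back on unload */
        check_password_hook = prev_check_password_hook;
    }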
Reported-by: Rafael Castro
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/15932-78f48f9ef166778c@postgresql.org
Backpatch-through: 9.4
As of Fedora 30, it seems that the system-provided macros for setting
up user privileges in SELinux policies don't grant the ability to read
/etc/passwd, as they formerly did. This restriction breaks psql
(which tries to use getpwuid() to obtain the user name it's running
under) and thereby the contrib/sepgsql regression test. Add explicit
specifications that we need the right to read /etc/passwd.
Mike Palmiotto, per a report from me. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/23856.1563381159@sss.pgh.pa.us
Previously, in postgresPlanForeignModify, we planned an UPDATE
operation on a foreign table so that we transmit only columns that
were explicitly targets of the UPDATE, so as to avoid unnecessary data
transmission. But if there were BEFORE ROW UPDATE triggers on the
foreign table, those triggers might change values for non-target
columns, in which case we would miss sending changed values for those
columns. To fix, disable this optimization, transmitting all columns,
when there are BEFORE ROW UPDATE triggers on the foreign table.
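The check reduces to consulting the relation's trigger descriptor; a
sketch (helper name illustrative):

    #include "postgres.h"
    #include "utils/rel.h"

    /* Sketch: a BEFORE UPDATE row trigger may change any column,
     * so the remote UPDATE must set them all. */
    static bool
    must_transmit_all_columns(Relation rel)
    {
        return rel->trigdesc && rel->trigdesc->trig_update_before_row;
    }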
This is an oversight in commit 7cbe57c34 which added triggers on foreign
tables, so apply the patch all the way back to 9.4 where that came in.
Author: Shohei Mochizuki
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/201905270152.x4R1q3qi014550@toshiba.co.jp
Descriptions of pg_trgm GUC options had % replaced with %% as if they
were printf-like format strings. But that's not needed since they are
just plain strings. This commit fixes that. Backpatch to the oldest
supported version, since this error has been present from the
beginning.
Reported-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoAgPKODUsu9gqUFiNqEOAqedStxJ-a0sapsJXWWAVp%3Dxg%40mail.gmail.com
Backpatch-through: 9.4
It's harmless to call PQfreemem() with a NULL argument, so the only
consequence was that if allocating 'schema' failed, but allocating 'table'
or 'field' succeeded, we would leak a bit of memory. That's highly
unlikely to happen, so this is just academic, but let's get it right.
Per bug #15838 from Timur Birsh. Backpatch back to 9.5, where the
PQfreemem() calls were introduced.
Discussion: https://www.postgresql.org/message-id/15838-3221652c72c5e69d@postgresql.org