Somebody messed up a refactoring here. As it stood, we'd check pg_ctl's
--version output twice for each cluster. Worse, the first check for the
new cluster's version happened before we'd done any validate_exec checks
there, breaking the check ordering the code intended.
A. Akenteva
Discussion: https://postgr.es/m/f9266a85d918a3cf3a386b5148aee666@postgrespro.ru
Upon further review, our Bonjour code doesn't actually work with the
Avahi not-too-compatible compatibility library. While you can get it
to work on non-macOS platforms if you link to Apple's own mDNSResponder
code, there don't seem to be many people who care about that. Leaving in
the AC_SEARCH_LIBS call seems more likely to encourage people to build
broken configurations than to do anything very useful.
Hence, remove the AC_SEARCH_LIBS call and put in a warning comment instead.
Discussion: https://postgr.es/m/2D8331C5-D64F-44C1-8717-63EDC6EAF7EB@brightforge.com
On macOS the relevant functions require no special library, but elsewhere
we need to pull in libdns_sd.
Back-patch to supported branches. No docs change since the docs do not
suggest that this is a Mac-only feature.
Luke Lonergan
Discussion: https://postgr.es/m/2D8331C5-D64F-44C1-8717-63EDC6EAF7EB@brightforge.com
The point of having separate ResourceOwnerEnlargeFoo and
ResourceOwnerRememberFoo functions is so that resource allocation
can happen in between. Doing it in some other order is just wrong.
OpenTemporaryFile() did open(), enlarge, remember, which would leak the
open file if the enlarge step ran out of memory. Because fd.c has its own
layer of resource-remembering, the consequences look like they'd be limited
to an intratransaction FD leak, but it's still not good.
IncrBufferRefCount() did enlarge, remember, incr-refcount, which would blow
up if the incr-refcount step ever failed. It was safe enough when written,
but since the introduction of PrivateRefCountHash, I think the assumption
that no error could happen there is pretty shaky.
The odds of real problems from either bug are probably small, but still,
back-patch to supported branches.
Thomas Munro and Tom Lane, per a comment from Andres Freund
isdigit(), isspace(), etc. are likely to give surprising results if passed a
signed char. We should always cast the argument to unsigned char to avoid
that. Error in commit 63d6b97fd, found by buildfarm member gaur.
Back-patch to 9.3, like that commit.
configure computed PG_VERSION_NUM incorrectly. (Coulda sworn I tested
that logic back when, but it had an obvious thinko.)
pg_upgrade had not been taught about the new dispensation with just
one part in the major version number.
Both things accidentally failed to fail with 10.0, but with 10.1 we
got the wrong results.
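For reference, under the one-part scheme the value is major * 10000 +
minor (also visible server-side as the server_version_num GUC), so 10.0
and 10.1 map to 100000 and 100001; a stale three-part computation
happens to yield the same answer for 10.0, which is why the error went
unnoticed until 10.1. As a quick check:

    -- PG_VERSION_NUM with one-part major versions: major * 10000 + minor,
    -- e.g. 10.0 => 100000, 10.1 => 100001
    SHOW server_version_num;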
Per buildfarm.
json{b}_populate_recordset() used the tuple descriptor created from the
query-level AS clause without worrying about whether it matched the actual
input record type. If it didn't, that would usually result in a crash,
though disclosure of server memory contents seems possible as well, for a
skilled attacker capable of issuing crafted SQL commands. Instead, use
the query-supplied descriptor only when there is no input tuple to look at,
and otherwise get a tuple descriptor based on the input tuple's own type
marking. The core code will detect any type mismatch in the latter case.
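For illustration, a hypothetical sketch of both cases (the record shapes
are invented for the example):

    -- No input tuple to look at: the query-supplied descriptor is used.
    SELECT * FROM json_populate_recordset(null::record,
        '[{"a":1},{"a":2}]') AS (a int);
    -- An actual input tuple carries its own type marking; its descriptor
    -- is used, so the mismatch with the AS clause now fails cleanly
    -- instead of the AS descriptor being applied blindly (a crash risk).
    SELECT * FROM json_populate_recordset(row(1,2),
        '[{"f1":42}]') AS (f1 text, f2 text, f3 text);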
Michael Paquier and Tom Lane, per a report from David Rowley.
Back-patch to 9.3 where this functionality was introduced.
Security: CVE-2017-15098
By default, $PGUSER has permission to unlink $PGLOG. If $PGUSER
replaces $PGLOG with a symbolic link, the server will corrupt the
link-targeted file by appending log messages. Since these scripts open
$PGLOG as root, the attack works regardless of target file ownership.
"make install" does not install these scripts anywhere. Users having
manually installed them in the past should repeat that process to
acquire this fix. Most script users have $PGLOG writable by root only,
located in $PGDATA. Just before updating one of these scripts, such
users should rename $PGLOG to $PGLOG.old. The script will then recreate
$PGLOG with proper ownership.
Reviewed by Peter Eisentraut. Reported by Antoine Scemama.
Security: CVE-2017-12172
The update path of an INSERT ... ON CONFLICT DO UPDATE requires SELECT
permission on the columns of the arbiter index, but it failed to check
for that in the case of an arbiter specified by constraint name.
In addition, for a table with row level security enabled, it failed to
check updated rows against the table's SELECT policies when the update
path was taken (regardless of how the arbiter index was specified).
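A sketch of the constraint-name case (role and table names
hypothetical):

    CREATE TABLE t (a int, b text, CONSTRAINT t_pkey PRIMARY KEY (a));
    GRANT INSERT (a, b), UPDATE (b) ON t TO alice;
    -- As alice: SELECT permission on the arbiter index column "a" is
    -- now required here, just as it already was for ON CONFLICT (a).
    INSERT INTO t VALUES (1, 'x')
      ON CONFLICT ON CONSTRAINT t_pkey DO UPDATE SET b = 'y';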
Backpatch to 9.5 where ON CONFLICT DO UPDATE and RLS were introduced.
Security: CVE-2017-15099
Makefile.global assigns the temp-install prerequisite to every target
named "check",
but similar targets must mention it explicitly. Affected targets
failed, tested $PATH binaries, or tested a stale temporary installation.
The src/test/modules examples worked properly when called as "make -C
src/test/modules/$FOO check", but "make -j" allowed the test to start
before the temporary installation was in place. Back-patch to 9.5,
where commit dcae5faccab64776376d354decda0017c648bb53 introduced the
shared temp-install.
In the v10 branch, also back-patch the effects of 1ff01b390 and c29c57890
on these files, to reduce future maintenance issues. (I'd do it further
back, except that the 9.X branches differ anyway due to xlog-to-wal
link tag renaming.)
The previous commit contained a thinko that made a single-range
summarization request process everything from that range to the end of
the table. Fix by setting the correct end point for the range. Per
buildfarm.
This makes the produced HTML anchors uppercase, keeping them backward
compatible with the previous (9.6) build system.
Reported-by: Thomas Kellerer <spam_eater@gmx.net>
When a publisher table has fewer columns than the corresponding table
on the subscriber, an update of a row on the publisher should update
only the columns in common. The previous coding mistakenly reset the
additional subscriber columns to NULL because it failed to skip updates
of columns not found in the attribute map.
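A sketch of the scenario (names hypothetical, connection string
elided):

    -- On the publisher:
    CREATE TABLE t (id int PRIMARY KEY, a text);
    CREATE PUBLICATION pub FOR TABLE t;
    -- On the subscriber, with an extra local column "b":
    CREATE TABLE t (id int PRIMARY KEY, a text, b text);
    CREATE SUBSCRIPTION sub CONNECTION '...' PUBLICATION pub;
    -- This publisher-side update must leave the subscriber's "b" alone;
    -- previously "b" was reset to NULL:
    UPDATE t SET a = 'new' WHERE id = 1;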
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
If a process is extending a table concurrently with some BRIN
summarization process, it is possible for the latter to miss pages added
by the former because the number of pages is computed ahead of time.
Fix by determining a fresh relation size after inserting the placeholder
tuple: any process that further extends the table concurrently will
update the placeholder tuple, while previous pages will be processed by
the heap scan.
Reported-by: Tomas Vondra
Reviewed-by: Tom Lane
Author: Álvaro Herrera
Discussion: https://postgr.es/m/083d996a-4a8a-0e13-800a-851dd09ad8cc@2ndquadrant.com
Backpatch-to: 9.5
In some cases the BRIN code releases lock on an index page, and later
re-acquires lock and tries to check that the tuple it was working on is
still there. That check was a couple bricks shy of a load. It didn't
consider that the page might have turned into a "revmap" page. (The
samepage code path doesn't call brin_getinsertbuffer(), so it isn't
protected by the checks for revmap status there.) It also didn't check
whether the tuple offset was now off the end of the linepointer array.
Since commit 24992c6db the latter case is pretty common, but at least
in principle it could have occurred before that. The net result is
that concurrent updates of a BRIN index could fail with errors like
"invalid index offnum" or "inconsistent range map".
Per report from Tomas Vondra. Back-patch to 9.5, since this code is
substantially the same in all versions containing BRIN.
Discussion: https://postgr.es/m/10d2b9f9-f427-03b8-8ad9-6af4ecacbee9@2ndquadrant.com
It turns out we misdiagnosed what the real problem was. Revert the
previous changes, because they may have worse consequences going
forward. A better fix is forthcoming.
The simplistic test case is kept, though disabled.
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
A candidate path needs to be canonicalized before being checked against
the mappings, because the mappings are also canonicalized. This is
especially relevant on Windows.
Reported-by: nb <nbedxp@gmail.com>
Author: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com>
Queries running with some non-pg_catalog schema frontmost in their search
path need to be careful to schema-qualify type names that should be sought
in pg_catalog. Vitaly Burovoy reported an oversight of this sort in
pg_dump's dumpSequence, and grepping detected another one in psql's
describeOneTableDetails, both introduced by sequence-related changes in
v10. In pg_dump, we can fix things by removing the cast altogether, since
it doesn't really matter what data types are reported for these query
result columns. Likewise in psql, the query seemed to be working unduly
hard to get a result that's guaranteed to be exactly 'bigint'.
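A sketch of the general hazard (schema and domain invented for the
example; int8, unlike the grammar keyword bigint, is looked up via
search_path):

    CREATE SCHEMA s;
    CREATE DOMAIN s.int8 AS text;
    SET search_path = s, pg_catalog;
    SELECT pg_typeof('42'::int8);             -- s.int8, not bigint
    SELECT pg_typeof('42'::pg_catalog.int8);  -- bigint, regardless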
I also changed a couple of occurrences of "::char" similarly. These are
not bugs, since "char" is a typename keyword and not subject to search_path
rules, but it seems better to use uniform style.
Vitaly Burovoy and Tom Lane
Discussion: https://postgr.es/m/CAKOSWN=ds66zLw2SqkLTM8wbXFgDbc_OdkmT3dJfPT2mE5kipA@mail.gmail.com
The change made by commit 906bfcad7 means that if you write a
parenthesized column list in UPDATE ... SET and that list contains only
one column, you now need to write ROW(expression) on the right-hand
side, not just a parenthesized expression. This was an
intentional change for spec compatibility and potential future
expansion of the possibilities for the RHS, but I'd neglected to
document it as a compatibility issue, figuring that hardly anyone
would bother with parenthesized syntax for a single target column.
I was wrong, as shown by questions from Justin Pryzby, Adam Brusselback,
and others. Move the release note item into the compatibility section
and point out the behavior change for a single target column.
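A sketch of the change (table and columns hypothetical):

    -- accepted before 906bfcad7:
    --   UPDATE t SET (c) = (c + 1);
    -- v10 requires an explicit row constructor for one column:
    UPDATE t SET (c) = ROW(c + 1);
    -- multi-column lists are unaffected:
    UPDATE t SET (c1, c2) = (c1 + 1, c2 + 1);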
Discussion: https://postgr.es/m/CAMjNa7cDLzPcs0xnRpkvqmJ6Vb6G3EH8CYGp9ZBjXdpFfTz6dg@mail.gmail.com
In autovacuum's "work item" processing, a few strings were allocated in
the current transaction's memory context, which goes away during error
handling; if an error happened during execution of the work item, the
pfree() calls to clean up afterwards would try to release already-released
memory, possibly leading to a crash. In branch master, this was already
fixed by commit 335f3d04e4c8, so backpatch that to REL_10_STABLE to fix
the problem there too.
As a secondary problem, verify that the autovacuum worker is connected
to the right database for each work item; otherwise some items would be
discarded by workers in other databases.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20171014035732.GB31726@telsasoft.com
This was always intended to work, but due to an oversight in
max_parallel_hazard_walker, it didn't. In testing, we missed the
fact that it was only working for custom plans, where the parameter
value had been substituted for the parameter itself early enough
that everything worked. In a generic plan, the Param node survives
and must be treated as parallel-safe. SerializeParamList provides
for the transmission of parameter values to workers.
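For instance (names hypothetical), a prepared query can now use
parallelism even after the plancache switches to a generic plan:

    PREPARE q(int) AS SELECT count(*) FROM big_table WHERE x < $1;
    EXECUTE q(1000);  -- early executions get custom plans: the value is
                      -- substituted for the Param, so these worked
    -- After enough executions a generic plan is typically chosen; the
    -- Param node survives into it and is now treated as parallel-safe.
    EXECUTE q(1000);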
Amit Kapila with help from Kuntal Ghosh. Some changes by me.
Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com
Without this fix, dropping a role can sometimes result in parallel
query failures in sessions that have used "SET ROLE" to assume the
dropped role, even if that setting isn't active any more.
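A sketch of the failure scenario (names hypothetical):

    SET ROLE doomed_role;
    RESET ROLE;                      -- the setting is no longer active
    -- meanwhile, in another session: DROP ROLE doomed_role;
    SELECT count(*) FROM big_table;  -- a parallel plan here could fail
                                     -- when workers tried to restore
                                     -- the stale role setting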
Report by Pavan Deolasee. Patch by Amit Kapila, reviewed by me.
Discussion: http://postgr.es/m/CABOikdOomRcZsLsLK+Z+qENM1zxyaWnAvFh3MJZzZnnKiF+REg@mail.gmail.com
Commit d5b760ecb wasn't quite right, on second thought: if the
caller didn't ask for column names then it would happily emit
more Vars than if the caller did ask for column names. This
is surely not a good idea. Advance the aliasp_item whether or
not we're preparing a colnames list.
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly
as many non-junk tlist items as the RTE has column aliases for it.
This was true at the time the code was written, and is still true so
far as parse analysis is concerned --- but when the function is used
during planning, the subquery might have appeared through insertion
of a view that now has more columns than it did when the outer query
was parsed. This results in a core dump if, for instance, we have
to expand a whole-row Var that references the subquery.
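For context, a sketch of how the situation arises (names hypothetical):

    CREATE VIEW v AS SELECT 1 AS a;
    -- CREATE OR REPLACE VIEW may append new columns:
    CREATE OR REPLACE VIEW v AS SELECT 1 AS a, 2 AS b;
    -- A query stored before the replacement (e.g. in another view's
    -- rule) still carries aliases for one column, but planning now
    -- inserts the two-column subquery in its place.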
To avoid crashing, we can either stop expanding the RTE when we run
out of aliases, or invent new aliases for the added columns. While
the latter might be more useful, the former is consistent with what
expandRTE() does for composite-returning functions in the RTE_FUNCTION
case, so it seems like we'd better do it that way.
Per bug #14876 from Samuel Horwitz. This has been busted since commit
ff1ea2173 allowed views to acquire more columns, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org
On closer investigation, commits f3ea3e3e8 et al were a few bricks
shy of a load. What we need is not so much to lock down the result
type of a FieldSelect, as to lock down the existence of the column
it's trying to extract. Otherwise, we can break it by dropping that
column. The dependency on the result type is then held indirectly
through the column, and doesn't need to be recorded explicitly.
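A sketch of the hazard (composite type and table invented for the
example):

    CREATE TYPE ct AS (x int, y int);
    CREATE TABLE t (c ct);
    CREATE INDEX t_cx ON t (((c).x));  -- expression tree contains a
                                       -- FieldSelect on column "x"
    -- Without a dependency on the column itself, nothing stopped this
    -- from leaving the index expression broken:
    ALTER TYPE ct DROP ATTRIBUTE x;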
Out of paranoia, I left in the code to record a dependency on the
result type, but it's used only if we can't identify the pg_class OID
for the column. That shouldn't ever happen right now, AFAICS, but
it seems possible that in future the input node could be marked as
being of type RECORD rather than some specific composite type.
Likewise for FieldStore.
Like the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us
If we try to run a parallel plan in serial mode because, for example,
it's going to be scanned via a cursor, but for some reason we're
already in parallel mode (for example because an outer query is
running in parallel), we'd incorrectly try to launch workers.
Fix by adding a flag to the EState, so that we can be certain that
ExecutePlan() and ExecGather()/ExecGatherMerge() will have the same
idea about whether we are executing serially or in parallel.
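For instance (names hypothetical), mirroring the cursor case above:

    BEGIN;
    -- A plan scanned via a cursor runs its parallel nodes without
    -- workers; the bug arose when such a query ran while the session
    -- was already in parallel mode, e.g. under an outer parallel query.
    DECLARE c CURSOR FOR SELECT count(*) FROM big_table;
    FETCH ALL FROM c;
    COMMIT;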
Report and fix by Amit Kapila with help from Kuntal Ghosh. A few
tweaks by me.
Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com
Previously, we skipped using search_indexed_tlist_for_sortgroupref()
if the tlist expression being sought in the child plan node was merely
a Var. This is purely an optimization, based on the theory that
search_indexed_tlist_for_var() is faster, and one copy of a Var should
be as good as another. However, the GROUPING SETS patch broke the
latter assumption: grouping columns containing the "same" Var can
sometimes have different outputs, as shown in the test case added here.
So do it the hard way whenever a ressortgroupref marking exists.
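A sketch of the kind of case at issue (not the committed regression
test; names hypothetical):

    -- In rows produced for the (b) grouping set, the grouping column
    -- "a" reads as NULL, so two references to the "same" Var can have
    -- different outputs:
    SELECT a, b, sum(c)
    FROM t
    GROUP BY GROUPING SETS ((a), (b));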
(If this seems like a bottleneck, we could imagine building a tlist index
data structure for ressortgroupref values, as we do for Vars. But I'll
let that idea go until there's some evidence it's worthwhile.)
Back-patch to 9.6. The problem also exists in 9.5 where GROUPING SETS
came in, but this patch is insufficient to resolve the problem in 9.5:
there is some obscure dependency on the upper-planner-pathification
work that happened in 9.6. Given that this is such a weird corner case,
and no end users have complained about it, it doesn't seem worth the work
to develop a fix for 9.5.
Patch by me, per a report from Heikki Linnakangas. (This does not fix
Heikki's original complaint, just the follow-on one.)
Discussion: https://postgr.es/m/aefc657e-edb2-64d5-6df1-a0828f6e9104@iki.fi
There have been numerous buildfarm failures, but the diagnostic is
currently silent about why the file could not be opened. Let's see if
we can get to the bottom of it.
Backpatch to all live branches.
Some people like to run libpq-using applications in environments where
there's no home directory. We've broken that scenario before (cf commits
5b4067798 and bd58d9d88), and commit ba005f193 broke it again, by making
it a hard error if we fail to get the home directory name while looking
for ~/.pgpass. The previous precedent is that if we can't get the home
directory name, we should just silently act as though the file we hoped
to find there doesn't exist. Rearrange the new code to honor that.
Looking around, the service-file code added by commit 41a4e4595 had the
same disease. Apparently, that escaped notice because it only runs when
a service name has been specified, which I guess the people who use this
scenario don't do. Nonetheless, it's wrong too, so fix that case as well.
Add a comment about this policy to pqGetHomeDirectory, in the probably
vain hope of forestalling the same error in future. And upgrade the
rather miserable commenting in parseServiceInfo, too.
In passing, also back off parseServiceInfo's assumption that only ENOENT
is an ignorable error from stat() when checking a service file. We would
need to ignore at least ENOTDIR as well (cf 5b4067798), and seeing that
the far-better-tested code for ~/.pgpass treats all stat() failures alike,
I think this code ought to as well.
Per bug #14872 from Dan Watson. Back-patch the .pgpass change to v10
where ba005f193 came in. The service-file bugs are far older, so
back-patch the other changes to all supported branches.
Discussion: https://postgr.es/m/20171025200457.1471.34504@wrigleys.postgresql.org
json_build_object(), json_build_array(), and their jsonb equivalents did not
correctly process explicit VARIADIC arguments. They are modified to use
the new extract_variadic_args() utility function which abstracts away
the details of the call method.
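Illustrative calls of the sort previously mishandled:

    SELECT json_build_object(VARIADIC ARRAY['a','1','b','2']);
    SELECT jsonb_build_array(VARIADIC ARRAY[1, 2, 3]);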
Michael Paquier, reviewed by Tom Lane and Dmitry Dolgov.
Backpatch to 9.5 for the jsonb fixes and 9.4 for the json fixes, as
that's where they originated.