When there is an identifiable REPLICA IDENTITY index on the target table,
heap_update leaks the id_attrs bitmapset. That's not many bytes, but it
adds up over enough rows, since the code typically runs in a query-lifespan
context. Bug introduced in commit e55704d8b, which did a rather poor job
of cloning the existing use-pattern for RelationGetIndexAttrBitmap().
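As a minimal sketch of the fix (assuming the surrounding heap_update
code, which already frees the other bitmapsets obtained from
RelationGetIndexAttrBitmap() the same way):

    /*
     * Free the attribute bitmapsets obtained above.  Without the last
     * call, id_attrs leaked once per updated row, accumulating in the
     * query-lifespan memory context.
     */
    bms_free(hot_attrs);
    bms_free(key_attrs);
    bms_free(id_attrs);     /* previously missing */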
Per bug #14293 from Zhou Digoal. Back-patch to 9.4 where the bug was
introduced.
Report: <20160824114320.15676.45171@wrigleys.postgresql.org>
ExecReScanAgg's check for whether it could re-use a previously calculated
hashtable neglected the possibility that the Agg node might reference
PARAM_EXEC Params that are not referenced by its input plan node. That's
okay if the Params are in upper tlist or qual expressions; but if one
appears in aggregate input expressions, then the hashtable contents need
to be recomputed when the Param's value changes.
To avoid unnecessary performance degradation in the case of a Param that
isn't within an aggregate input, add logic to the planner to determine
which Params are within aggregate inputs. This requires a new field in
struct Agg, but fortunately we never write plans to disk, so this isn't
an initdb-forcing change.
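As a rough sketch of the corrected reuse test (field and variable names
are illustrative; the new planner-provided field is a Bitmapset of the
Param IDs appearing inside aggregate input expressions):

    /*
     * The hashtable from the previous scan can be reused only if the
     * input plan needs no rescan and none of the changed PARAM_EXEC
     * Params feeds an aggregate input expression.
     */
    if (node->ss.ps.lefttree->chgParam == NULL &&
        !bms_overlap(node->ss.ps.chgParam, aggnode->aggParams))
    {
        /* safe: changed Params can only affect upper tlist/quals */
    }
    else
    {
        /* must release and recompute the hashtable contents */
    }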
Per report from Jeevan Chalke. This has been broken since forever,
so back-patch to all supported branches.
Andrew Gierth, with minor adjustments by me
Report: <CAM2+6=VY8ykfLT5Q8vb9B6EbeBk-NGuLbT6seaQ+Fq4zXvrDcA@mail.gmail.com>
Clobbering errno during cleanup after an error is an oft-repeated, easy
to make mistake. Deal with it here as everywhere else, by saving it
aside and restoring after cleanup, before ereport'ing.
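The pattern being applied, as a self-contained sketch:

    /*
     * Save errno before cleanup, since cleanup calls such as close()
     * can clobber it; restore it just before ereport() so that %m
     * reports the original failure, not the cleanup's.
     */
    int         save_errno = errno;

    close(fd);              /* cleanup that may overwrite errno */

    errno = save_errno;
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not read file \"%s\": %m", path)));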
In passing, add a missing errcode declaration in another ereport() call
in the same file, which I noticed while skimming the file looking for
similar problems.
Backpatch to 9.4, where this code was introduced.
This adds a couple of new timezones that are present in the newer
versions of Windows. It also updates comments to reference UTC rather
than GMT, as this change has been made in Windows.
Michael Paquier
Replication slot creation did not clear all fields of the slot. At
server start the shared memory is zeroed, but when a physical
replication slot was created in the shared memory of a previously
existing logical slot, catalog_xmin would not be cleared. That in turn
would prevent vacuum from doing its duties.
To fix, initialize all the fields. To make similar future bugs less
likely, zero all of ReplicationSlotPersistentData, and re-order the
rest of the initialization to be in struct member order.
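A sketch of the defensive part of the fix (assuming the slot struct
embeds its persistent data as slot->data, per ReplicationSlot):

    /*
     * Zero the whole persistent-data struct instead of assigning fields
     * one by one, so that stale values (such as a catalog_xmin left
     * over from a previous logical slot in the same shared-memory
     * entry) cannot survive slot creation.
     */
    memset(&slot->data, 0, sizeof(ReplicationSlotPersistentData));
    namestrcpy(&slot->data.name, name);
    slot->data.database = db_specific ? MyDatabaseId : InvalidOid;
    slot->data.persistency = persistency;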
Analysis: Andrew Gierth
Reported-By: md@chewy.com
Author: Michael Paquier
Discussion: <20160705173502.1398.70934@wrigleys.postgresql.org>
Backpatch: 9.4, where replication slots were introduced
As implemented, -e ran an EXPLAIN but then discarded the output, which
certainly seems pointless. Make it print to stdout instead. It's been
like that forever, so back-patch to all supported branches.
Daniel Gustafsson, reviewed by Andreas Scherbaum
Patch: <B97BDCB7-A3B3-4734-90B5-EDD586941629@yesql.se>
Rendering a version-10 server as "10.0" rather than "10" is somewhat
cosmetic, since as long as you know what you are looking
at, "10.0" is a serviceable substitute for "10". But there is a potential
for confusion between version numbers with minor numbers and those without
--- we don't want people asking "why is psql saying 10.0 when my server is
10.2". Therefore, back-patch as far as practical, which turns out to be
9.3. I could have redone the patch to use fprintf(stderr) in place of
psql_error(), but it seems more work than is warranted for branches that
will be EOL or nearly so by the time v10 comes out.
Although only psql seems to contain any code that needs this, I chose
to put the support function into fe_utils, since it seems likely we'll
need it in other client programs in future. (In 9.3-9.5, use dumputils.c,
the predecessor of fe_utils/string_utils.c.)
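For illustration, a hedged sketch of such a helper (the name and exact
signature here are hypothetical, not necessarily those of the function
added to fe_utils):

    /*
     * Format a PQserverVersion()-style number, printing "10" rather
     * than "10.0" for servers using the new two-part numbering.
     */
    static char *
    format_pg_version(int version, bool include_minor,
                      char *buf, size_t buflen)
    {
        if (version >= 100000)
        {
            /* new two-part scheme: 100001 is 10.1 */
            if (include_minor)
                snprintf(buf, buflen, "%d.%d",
                         version / 10000, version % 10000);
            else
                snprintf(buf, buflen, "%d", version / 10000);
        }
        else
        {
            /* old three-part scheme: 90604 is 9.6.4 */
            if (include_minor)
                snprintf(buf, buflen, "%d.%d.%d", version / 10000,
                         (version / 100) % 100, version % 100);
            else
                snprintf(buf, buflen, "%d.%d",
                         version / 10000, (version / 100) % 100);
        }
        return buf;
    }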
In HEAD, also fix the backend code that whines about loadable-library
version mismatch. I don't see much need to back-patch that.
NUMERIC_MAX_PRECISION is a purely arbitrary constraint on the precision
and scale you can write in a numeric typmod. It might once have had
something to do with the allowed range of a typmod-less numeric value,
but at least since 9.1 we've allowed, and documented that we allowed,
any value that would physically fit in the numeric storage format;
which is something over 100000 decimal digits, not 1000.
Hence, get rid of numeric_in()'s use of NUMERIC_MAX_PRECISION as a limit
on the allowed range of the exponent in scientific-format input. That was
especially silly in view of the fact that you can enter larger numbers as
long as you don't use 'e' to do it. Just constrain the value enough to
avoid localized overflow, and let make_result be the final arbiter of what
is too large. Likewise adjust ecpg's equivalent of this code.
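A sketch of the revised guard in numeric_in()'s exponent handling (the
bound exists only to prevent integer overflow in later weight/dscale
arithmetic; make_result() remains the final range check):

    if (exponent >= INT_MAX / 2 || exponent <= -(INT_MAX / 2))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                 errmsg("invalid input syntax for type numeric: \"%s\"",
                        str)));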
Also get rid of numeric_recv()'s use of NUMERIC_MAX_PRECISION to limit the
number of base-NBASE digits it would accept. That created a dump/restore
hazard for binary COPY without doing anything useful; the wire-format
limit on number of digits (65535) is about as tight as we would want.
In HEAD, also get rid of pg_size_bytes()'s unnecessary intimacy with what
the numeric range limit is. That code doesn't exist in the back branches.
Per gripe from Aravind Kumar. Back-patch to all supported branches,
since they all contain the documentation claim about allowed range of
NUMERIC (cf commit cabf5d84b).
Discussion: <2895.1471195721@sss.pgh.pa.us>
EXPLAIN (ANALYZE, TIMING OFF) would print an elapsed time of zero for
a trigger function: no measurement had been taken, but the field was
printed anyway. This isn't what EXPLAIN does elsewhere, so suppress it.
In the same vein, EXPLAIN (ANALYZE, BUFFERS) with a non-text output
format would print buffer I/O timing numbers even when no measurement
had been taken because track_io_timing was off. That seems not per
policy either, so change it.
Back-patch to 9.2 where these features were introduced.
Maksim Milyutin
Discussion: <081c0540-ecaa-bd29-3fd2-6358f3b359a9@postgrespro.ru>
Commit 874fe3aea changed the command tag returned for CREATE MATVIEW/CREATE
TABLE AS ... WITH NO DATA, but missed that there was code in spi.c that
expected the command tag to always be "SELECT". Fortunately, the
consequence was only an Assert failure, so this oversight should have no
impact in production builds.
Since this code path was evidently un-exercised, add a regression test.
Per report from Shivam Saxena. Back-patch to 9.3, like the previous commit.
Michael Paquier
Report: <97218716-480B-4527-B5CD-D08D798A0C7B@dresources.com>
Several places in NUM_numpart_from_char(), which is called from the SQL
function to_number(text, text), could accidentally read one byte past
the end of the input buffer (which comes from the input text datum and
is not null-terminated).
1. One leading space character would be skipped, but there was no check
that the input was at least one byte long. This does not happen in
practice, but for defensiveness, add a check anyway.
2. Commit 4a3a1e2cf apparently accidentally duplicated the code that
skips one space character (so that two spaces might be skipped), but
there was no overflow check before skipping the second byte. Fix by
removing the duplicate code.
3. A logic error would allow a one-byte over-read when looking for a
trailing sign (S) placeholder.
In each case, the extra byte cannot be read out directly, but looking at
it might cause a crash.
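A condensed sketch of the defensive pattern for the three items above
(variable names are illustrative; the real code tracks the scan
position and input length in its own state struct):

    const char *cp  = input;        /* current scan position */
    const char *end = input + input_len;

    /* item 1: bounds check before skipping a leading space */
    if (cp < end && *cp == ' ')
        cp++;

    /* item 2: the accidentally duplicated space-skip is just removed */

    /* item 3: bounds check before reading a possible sign byte */
    if (cp < end && (*cp == '+' || *cp == '-'))
        cp++;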
The third item was discovered by Piotr Stefaniak, the first two were
found and analyzed by Tom Lane and Peter Eisentraut.
ExecEvalCase() tried to save a cycle or two by passing
&econtext->caseValue_isNull as the isNull argument to its sub-evaluation of
the CASE value expression. If that subexpression itself contained a CASE,
then *isNull was an alias for econtext->caseValue_isNull within the
recursive call of ExecEvalCase(), leading to confusion about whether the
inner call's caseValue was null or not. In the worst case this could lead
to a core dump due to dereferencing a null pointer. Fix by not assigning
to the global variable until control comes back from the subexpression.
Also, avoid using the passed-in isNull pointer transiently for evaluation
of WHEN expressions. (Either one of these changes would have been
sufficient to fix the known misbehavior, but it's clear now that each of
these choices was in itself dangerous coding practice and best avoided.
There do not seem to be any similar hazards elsewhere in execQual.c.)
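A sketch of the first of those changes (evaluate into locals; publish
to the econtext only after the subexpression has returned):

    Datum       save_datum = econtext->caseValue_datum;
    bool        save_isNull = econtext->caseValue_isNull;
    Datum       arg_value;
    bool        arg_isNull;

    /*
     * If caseExpr->arg itself contains a CASE, its recursive evaluation
     * may read and set the econtext's caseValue fields; assigning our
     * result only after control returns avoids aliasing them.
     */
    arg_value = ExecEvalExpr(caseExpr->arg, econtext, &arg_isNull, NULL);

    econtext->caseValue_datum = arg_value;
    econtext->caseValue_isNull = arg_isNull;

    /* ... evaluate WHEN clauses using a local isNull variable ... */

    econtext->caseValue_datum = save_datum;
    econtext->caseValue_isNull = save_isNull;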
Also, it was possible for inlining of a SQL function that implements the
equality operator used for a CASE comparison to result in one CASE
expression's CaseTestExpr node being inserted inside another CASE
expression. This would certainly result in wrong answers since the
improperly nested CaseTestExpr would be caused to return the inner CASE's
comparison value not the outer's. If the CASE values were of different
data types, a crash might result; moreover such situations could be abused
to allow disclosure of portions of server memory. To fix, teach
inline_function to check for "bare" CaseTestExpr nodes in the arguments of
a function to be inlined, and avoid inlining if there are any.
Heikki Linnakangas, Michael Paquier, Tom Lane
Report: https://github.com/greenplum-db/gpdb/pull/327
Report: <4DDCEEB8.50602@enterprisedb.com>
Security: CVE-2016-5423
Due to simplistic quoting and confusion of database names with conninfo
strings, roles with the CREATEDB or CREATEROLE option could escalate to
superuser privileges when a superuser next ran certain maintenance
commands. The new coding rule for PQconnectdbParams() calls, documented
at conninfo_array_parse(), is to pass expand_dbname=true and wrap
literal database names in a trivial connection string. Escape
zero-length values in appendConnStrVal(). Back-patch to 9.1 (all
supported versions).
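The rule in use, sketched with standard PQExpBuffer handling
(appendConnStrVal is the escaping function mentioned above):

    PQExpBufferData connstr;

    /*
     * Wrap the literal database name in a trivial connection string, so
     * that PQconnectdbParams(..., expand_dbname=true) always treats it
     * as a quoted value and a hostile name cannot be reinterpreted as
     * conninfo syntax.
     */
    initPQExpBuffer(&connstr);
    appendPQExpBufferStr(&connstr, "dbname=");
    appendConnStrVal(&connstr, dbname);

    keywords[i] = "dbname";
    values[i] = connstr.data;       /* passed with expand_dbname = true */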
Nathan Bossart, Michael Paquier, and Noah Misch. Reviewed by Peter
Eisentraut. Reported by Nathan Bossart.
Security: CVE-2016-5424
Rename these newly-extern functions with terms more typical of their new
neighbors. No functional changes; a subsequent commit will use them in
more places. Back-patch to 9.1 (all supported versions). Back branches
lack src/fe_utils, so instead rename the functions in place; the
subsequent commit will copy them into the other programs using them.
Security: CVE-2016-5424
The incorrect quoting of Windows shell arguments may have permitted
arbitrary command execution.
At a minimum, it gave broader control over the command line to actors
supposed to have control over a single argument. Back-patch to 9.1 (all
supported versions).
Security: CVE-2016-5424
Carriage return and line feed characters prematurely terminate Windows
shell command processing, causing the shell to execute a prefix of the
intended command. The
chief alternative to rejecting these characters was to bypass the
Windows shell with CreateProcess(), but the ability to use such names
has little value. Back-patch to 9.1 (all supported versions).
This change formally revokes support for these characters in database
names and role names. Don't document this; the error message is
self-explanatory, and too few users would benefit. A future major
release may forbid creation of databases and roles so named. For now,
check only at known weak points in pg_dumpall. Future commits will,
without notice, reject affected names from other frontend programs.
Also extend the restriction to pg_dumpall --dbname=CONNSTR arguments and
--file arguments. Unlike the effects on role name arguments and
database names, this does not reflect a broad policy change. A
migration to CreateProcess() could lift these two restrictions.
Reviewed by Peter Eisentraut.
Security: CVE-2016-5424
These programs (the src/bin/scripts utilities, such as reindexdb and
vacuumdb) nominally accepted conninfo strings, but they would proceed
to use the original dbname parameter as though it were an
unadorned database name. This caused "reindexdb dbname=foo" to issue an
SQL command that always failed, and other programs printed a conninfo
string in error messages that purported to print a database name. Fix
both problems by using PQdb() to retrieve actual database names.
Continue to print the full conninfo string when reporting a connection
failure. It is informative there, and if the database name is the sole
problem, the server-side error message will include the name. Beyond
those user-visible fixes, this allows a subsequent commit to synthesize
and use conninfo strings without that implementation detail leaking into
messages. As a side effect, the "vacuuming database" message now
appears after, not before, the connection attempt. Back-patch to 9.1
(all supported versions).
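A sketch of the user-visible part of the fix (connect_to_db stands in
for the scripts' shared connection helper and is hypothetical):

    PGconn     *conn = connect_to_db(dbname_or_conninfo);

    /*
     * Once connected, report the actual database name via PQdb() rather
     * than echoing the raw dbname parameter, which may be a full
     * conninfo string.
     */
    printf("%s: vacuuming database \"%s\"\n", progname, PQdb(conn));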
Reviewed by Michael Paquier and Peter Eisentraut.
Security: CVE-2016-5424
The decision to reuse values of parameters from a previous connection
has been based on whether the new connection target is a conninfo
string. Add a psql \connect option, -reuse-previous=on|off, as an
explicit means of overriding that default. This feature arose as one component
of a fix for security vulnerabilities in pg_dump, pg_dumpall, and
pg_upgrade, so back-patch to 9.1 (all supported versions). In 9.3 and
later, comment paragraphs that required update had already-incorrect
claims about behavior when no connection is open; fix those problems.
Security: CVE-2016-5424
In arguments to \connect, \password, and \crosstabview, a doubled pair
of double quotes was wrongly treated as closing the double-quoted
string, rather than as an escaped literal quote character. Make the
behavior match the documentation.
This is a compatibility break, but I more expect to find software with
untested reliance on the documented behavior than software reliant on
today's behavior. Back-patch to 9.1 (all supported versions).
Reviewed by Tom Lane and Peter Eisentraut.
Security: CVE-2016-5424
If ANALYZE found no repeated non-null entries in its sample, it set the
column's stadistinct value to -1.0, intending to indicate that the entries
are all distinct. But what this value actually means is that the number
of distinct values is 100% of the table's rowcount, and thus it was
overestimating the number of distinct values by however many nulls there
are. This could lead to very poor selectivity estimates, as for example
in a recent report from Andreas Joseph Krogh. We should discount the
stadistinct value by whatever we've estimated the nulls fraction to be.
(That is what will happen if we choose to use a negative stadistinct for
a column that does have repeated entries, so this code path was just
inconsistent.)
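The corrected computation for the all-distinct path, sketched (by this
point stanullfrac has already been estimated from the sample):

    if (nmultiple == 0)
    {
        /*
         * All sampled non-null values were distinct.  stadistinct =
         * -1.0 would claim ndistinct equals the table rowcount;
         * discount by the null fraction so it claims only the non-null
         * share of the rows.
         */
        stats->stadistinct = -1.0 * (1.0 - stats->stanullfrac);
    }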
In addition to fixing the stadistinct entries stored by several different
ANALYZE code paths, adjust the logic where get_variable_numdistinct()
forces an "all distinct" estimate on the basis of finding a relevant unique
index. Unique indexes don't reject nulls, so there's no reason to assume
that the null fraction doesn't apply.
Back-patch to all supported branches. Back-patching is a bit of a judgment
call, but this problem seems to affect only a few users (else we'd have
identified it long ago), and it's bad enough when it does happen that
destabilizing plan choices in a worse direction seems unlikely.
Patch by me, with documentation wording suggested by Dean Rasheed
Report: <VisenaEmail.26.df42f82acae38a58.156463942b8@tc7-visena>
Discussion: <16143.1470350371@sss.pgh.pa.us>
An oversight in logical decoding could cause failure to decode an outer
transaction containing changes, if a subtransaction had an XID but no
actual changes. Per bug #14279 from Marko Tiikkaja. Patch by Marko
based on analysis by Andrew Gierth.
Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>
In _bt_unlink_halfdead_page(), we might fail to find an immediate left
sibling of the target page, perhaps because of corruption of the page
sibling links. The code intends to cope with this by just abandoning
the deletion attempt; but what actually happens is that it fails outright
due to releasing the same buffer lock twice. (And error recovery masks
a second problem, which is possible leakage of a pin on another page.)
Seems to have been introduced by careless refactoring in commit efada2b8e.
Since there are multiple cases to consider, let's make releasing the buffer
lock in the failure case the responsibility of _bt_unlink_halfdead_page()
not its caller.
Also, avoid fetching the leaf page's left-link again after we've dropped
lock on the page. This is probably harmless, but it's not exactly good
coding practice.
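A sketch of the new responsibility split (names loosely follow
nbtpage.c; the test for a missing left sibling is elided):

    if (left_sibling_not_found)
    {
        elog(LOG, "no left sibling (concurrent deletion?) of block %u in \"%s\"",
             target, RelationGetRelationName(rel));

        /*
         * Release our own buffer lock before giving up; the caller must
         * no longer release it too, which is what produced the
         * double-release failure.
         */
        _bt_relbuf(rel, leafbuf);
        return false;
    }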
Per report from Kyotaro Horiguchi. Back-patch to 9.4 where the faulty code
was introduced.
Discussion: <20160803.173116.111915228.horiguchi.kyotaro@lab.ntt.co.jp>
Beginning with the next development cycle, PG servers will report two-part
not three-part version numbers. Fix libpq so that it will compute the
correct numeric representation of such server versions for reporting by
PQserverVersion(). It's desirable to get this into the field and
back-patched ASAP, so that older clients are more likely to understand the
new server version numbering by the time any such servers are in the wild.
(The results with an old client would probably not be catastrophic anyway
for a released server; for example "10.1" would be interpreted as 100100
which would be wrong in detail but would not likely cause an old client to
misbehave badly. But "10devel" or "10beta1" would result in sversion==0
which at best would result in disabling all use of modern features.)
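A self-contained sketch of the parsing rule (the real code lives in
libpq's handling of the server_version parameter; names here are
illustrative):

    #include <stdio.h>

    /* Map "9.6.1" -> 90601, "10.1" -> 100001, "10devel" -> 100000. */
    static int
    parse_server_version(const char *val)
    {
        int     vmaj,
                vmin = 0,
                vrev = 0;
        int     cnt = sscanf(val, "%d.%d.%d", &vmaj, &vmin, &vrev);

        if (cnt < 1)
            return 0;                   /* unknown */
        if (cnt <= 2 && vmaj >= 10)
            return vmaj * 10000 + vmin; /* new two-part scheme */
        return (vmaj * 100 + vmin) * 100 + vrev;    /* old scheme */
    }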
Extracted from a patch by Peter Eisentraut; comments added by me
Patch: <802ec140-635d-ad86-5fdf-d3af0e260c22@2ndquadrant.com>
DST law changes in Kemerovo and Novosibirsk. Historical corrections for
Azerbaijan, Belarus, and Morocco. Asia/Novokuznetsk and Asia/Novosibirsk
now use numeric time zone abbreviations instead of invented ones. Zones
for Antarctic bases and other locations that have been uninhabited for
portions of the time span known to the tzdata database now report "-00"
rather than "zzz" as the zone abbreviation for those time spans.
Also, I decided to remove some of the timezone/data/ files that we don't
use. At one time that subdirectory was a complete copy of what IANA
distributes in the tzdata tarballs, but that hasn't been true for a long
time. There seems no good reason to keep shipping those specific files
but not others; they're just bloating our tarballs.
Since pg_dump's -c plus -C options request dropping and recreating the
target database
as a whole, not dropping individual objects in it, we should assume that
the public schema already exists and need not be created. The previous
coding considered only the state of the -c option, so it would emit
"CREATE SCHEMA public" anyway, leading to an unexpected error in restore.
Back-patch to 9.2. Older versions did not accept -c with -C so the
issue doesn't arise there. (The logic being patched here dates to 8.0,
cf commit 2193121fa, so it's not really wrong that it didn't consider
the case at the time.)
Note that versions before 9.6 will still attempt to emit REVOKE/GRANT
on the public schema; but that happens without -c/-C too, and doesn't
seem to be the focus of this complaint. I considered extending this
stanza to also skip the public schema's ACL, but that would be a
misfeature, as it'd break cases where users intentionally changed that
ACL. The real fix for this aspect is Stephen Frost's work to not dump
built-in ACLs, and that's not going to get back-ported.
Per bugs #13804 and #14271. Solution found by David Johnston and later
rediscovered by me.
Report: <20151207163520.2628.95990@wrigleys.postgresql.org>
Report: <20160801021955.1430.47434@wrigleys.postgresql.org>
Calling CHECK_FOR_INTERRUPTS between WaitLatch and ResetLatch creates
a race condition, because if an interesting
interrupt happens after we've checked InterruptPending but before we reset
our latch, the latch-setting done by the signal handler would get lost,
and then we might block at WaitLatch in the next iteration without ever
noticing the interrupt condition. You can put the CHECK_FOR_INTERRUPTS
before WaitLatch or after ResetLatch, but not between them.
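As a sketch of a typical wait loop, showing the safe placements
(MyLatch is the modern spelling; older branches use
&MyProc->procLatch):

    for (;;)
    {
        CHECK_FOR_INTERRUPTS();     /* safe: before WaitLatch */

        (void) WaitLatch(MyLatch, WL_LATCH_SET, 0L);
        ResetLatch(MyLatch);

        /*
         * Also safe here, after ResetLatch.  But in the window between
         * WaitLatch and ResetLatch, a latch set by a signal handler
         * would be wiped out, losing the wakeup.
         */
        CHECK_FOR_INTERRUPTS();

        /* ... examine shared state and do work ... */
    }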
Aside from fixing the bugs, add some explanatory comments to latch.h
to perhaps forestall the next person from making the same mistake.
In HEAD, also replace gather_readnext's direct call of
HandleParallelMessages with CHECK_FOR_INTERRUPTS. It does not seem clean
or useful for this one caller to bypass ProcessInterrupts and go straight
to HandleParallelMessages; not least because that fails to consider the
InterruptPending flag, resulting in useless work both here
(if InterruptPending isn't set) and in the next CHECK_FOR_INTERRUPTS call
(if it is).
This thinko seems to have been introduced in the initial coding of
storage/ipc/shm_mq.c (commit ec9037df2), and then blindly copied into all
the subsequent parallel-query support logic. Back-patch relevant hunks
to 9.4 to extirpate the error everywhere.
Discussion: <1661.1469996911@sss.pgh.pa.us>
The help message for pg_basebackup specifies that the numbers 0 through
9 are accepted as valid values of the -Z option, but -Z 0 was
previously rejected as an invalid compression level.
Per discussion, it's better to make pg_basebackup treat 0 as a valid
compression level meaning no compression, like pg_dump.
Back-patch to all supported versions.
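The corrected option check, sketched:

    case 'Z':
        compresslevel = atoi(optarg);
        /* 0 is now accepted, meaning no compression */
        if (compresslevel < 0 || compresslevel > 9)
        {
            fprintf(stderr, "%s: invalid compression level \"%s\"\n",
                    progname, optarg);
            exit(1);
        }
        break;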
Reported-By: Jeff Janes
Reviewed-By: Amit Kapila
Discussion: CAMkU=1x+GwjSayc57v6w87ij6iRGFWt=hVfM0B64b1_bPVKRqg@mail.gmail.com
This text was added by commit ff213239c, and not long thereafter obsoleted
by commit 4adc2f72a (which made the test depend on NBuffers instead); but
nobody noticed the need for an update. Commit 9563d5b5e adds some further
dependency on maintenance_work_mem, but the existing verbiage seems to
cover that with about as much precision as we really want here. Let's
just take it all out rather than leaving ourselves open to more errors of
omission in future. (That solution makes this change back-patchable, too.)
Noted by Peter Geoghegan.
Discussion: <CAM3SWZRVANbj9GA9j40fAwheQCZQtSwqTN1GBTVwRrRbmSf7cg@mail.gmail.com>
Per the fgets() specification, it cannot return without reading some data
unless it reports EOF or error. So the code here assumed that the data
buffer would necessarily be nonempty when we go to check for a newline
having been read. However, Agostino Sarubbo noticed that this could fail
to be true if the first byte of the data is a NUL (\0). The fgets() API
doesn't really work for embedded NULs, which is something I don't feel
any great need for us to worry about since we generally don't allow NULs
in SQL strings anyway. But we should not read off the end of our own
buffer if the case occurs. Normally this would just be a harmless read,
but if you were unlucky the byte before the buffer would contain '\n'
and we'd overwrite it with '\0', and if you were really unlucky that
might be valuable data and psql would crash.
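A self-contained sketch of the guarded pattern:

    char    buf[1024];

    if (fgets(buf, sizeof(buf), fp) != NULL)
    {
        size_t  len = strlen(buf);

        /*
         * If the first byte fgets() stored is '\0', len is zero, and
         * testing buf[len - 1] would read (and the assignment would
         * write) one byte before the buffer.  Check len first.
         */
        if (len > 0 && buf[len - 1] == '\n')
            buf[len - 1] = '\0';
    }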
Agostino reported this to pgsql-security, but after discussion we concluded
that it isn't worth treating as a security bug; if you can control the
input to psql you can do far more interesting things than just maybe-crash
it. Nonetheless, it is a bug, so back-patch to all supported versions.
Commits 4452000f3 et al established semantics for NullTest.argisrow that
are a bit different from its initial conception: rather than being merely
a cache of whether we've determined the input to have composite type,
the flag now has the further meaning that we should apply field-by-field
testing as per the standard's definition of IS [NOT] NULL. If argisrow
is false and yet the input has composite type, the construct instead has
the semantics of IS [NOT] DISTINCT FROM NULL. Update the comments in
primnodes.h to clarify this, and fix ruleutils.c and deparse.c to print
such cases correctly. In the case of ruleutils.c, this merely results in
cosmetic changes in EXPLAIN output, since the case can't currently arise
in stored rules. However, it represents a live bug for deparse.c, which
would formerly have sent a remote query that had semantics different
from the local behavior. (From the user's standpoint, this means that
testing a remote nested-composite column for null-ness could have had
unexpected recursive behavior much like that fixed in 4452000f3.)
In a related but somewhat independent fix, make plancat.c set argisrow
to false in all NullTest expressions constructed to represent "attnotnull"
constructs. Since attnotnull is actually enforced as a simple null-value
check, this is a more accurate representation of the semantics; we were
previously overpromising what it meant for composite columns, which might
possibly lead to incorrect planner optimizations. (It seems that what the
SQL spec expects a NOT NULL constraint to mean is an IS NOT NULL test, so
arguably we are violating the spec and should fix attnotnull to do the
other thing. If we ever do, this part should get reverted.)
Back-patch, same as the previous commit.
Discussion: <10682.1469566308@sss.pgh.pa.us>
The docs failed to explain that LIKE INCLUDING INDEXES would not preserve
the names of indexes and associated constraints. Also, it wasn't mentioned
that EXCLUDE constraints would be copied by this option. The latter
oversight seems enough of a documentation bug to justify back-patching.
In passing, do some minor copy-editing in the same area, and add an entry
for LIKE under "Compatibility", since it's not exactly a faithful
implementation of the standard's feature.
Discussion: <20160728151154.AABE64016B@smtp.hushmail.com>
start_postmaster() registered stop_postmaster_atexit as an atexit(3)
callback each time through, although the obvious intention was to do
so only once per program run. The extra registrations were harmless,
so long as we didn't exceed ATEXIT_MAX, but still it's a bug.
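The once-only registration, sketched:

    static bool registered_cleanup = false;

    if (!registered_cleanup)
    {
        /*
         * Register only on the first call; each repeated atexit()
         * registration consumes a slot toward ATEXIT_MAX, and the
         * callback would run once per registration at exit.
         */
        atexit(stop_postmaster_atexit);
        registered_cleanup = true;
    }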
Artur Zakirov, with bikeshedding by Kyotaro Horiguchi and me
Discussion: <d279e817-02b5-caa6-215f-cfb05dce109a@postgrespro.ru>
The description of the udt_privileges view contained an incorrect
copy-pasted word.
Back-patch to 9.2, where the udt_privileges view was added.
Author: Alexander Law
The SQL standard appears to specify that IS [NOT] NULL's tests of field
nullness are non-recursive, ie, we shouldn't consider that a composite
field with value ROW(NULL,NULL) is null for this purpose.
ExecEvalNullTest got this right, but eval_const_expressions did not,
leading to weird inconsistencies depending on whether the expression
was such that the planner could apply constant folding.
Also, adjust the docs to mention that IS [NOT] DISTINCT FROM NULL can be
used as a substitute test if a simple null check is wanted for a rowtype
argument. That motivated reordering things so that IS [NOT] DISTINCT FROM
is described before IS [NOT] NULL. In HEAD, I went a bit further and added
a table showing all the comparison-related predicates.
Per bug #14235. Back-patch to all supported branches, since it's certainly
undesirable that constant-folding should change the semantics.
Report and patch by Andrew Gierth; assorted wordsmithing and revised
regression test cases by me.
Report: <20160708024746.1410.57282@wrigleys.postgresql.org>
In the btree_gin and citext regression tests, avoid some
not-particularly-interesting
dependencies on the sorting of 'aa'. In tsearch2, use COLLATE "C" to
remove an uninteresting dependency on locale sort order (and thereby
allow removal of a variant expected-file).
Also, in citext, avoid assuming that lower('I') = 'i'. This isn't relevant
to Danish but it does fail in Turkish.
Another peculiarity of the Danish locale is its unusual idea of how to
sort upper vs. lower case. One of the pltcl test cases has
an issue with that. Now that COLLATE works in all supported branches,
we can just change the test to be locale-independent, and get rid of
the variant expected file that used to support non-C locales.
runtime.sgml used to contain a table of estimated shared memory consumption
rates for max_connections and some other GUCs. Commit 390bfc643 removed
that on the well-founded grounds that (a) we weren't maintaining the
entries well and (b) it no longer mattered so much once we got out from
under SysV shmem limits. But it missed that there were even-more-obsolete
versions of some of those numbers in comments in postgresql.conf.sample.
Remove those too. Back-patch to 9.3 where the aforesaid commit went in.