A query such as "SELECT x UNION SELECT y UNION SELECT z UNION ..."
produces a left-deep nested parse tree, which we formerly showed in its
full nested glory and with all the possible parentheses. This does little
for readability, though, and long UNION lists resulting in excessive
indentation are common. Instead, let's omit parentheses and indent all
the subqueries at the same level in such cases.
This patch skips indentation/parenthesization whenever the lefthand input
of a SetOperationStmt is another SetOperationStmt of the same kind and
ALL/DISTINCT property. We could teach the code the exact syntactic
precedence of set operations and thereby avoid parenthesization in some
more cases, but it's not clear that that'd be a readability win: it seems
better to parenthesize if the set operation changes. (As an example,
if there's one UNION in a long list of UNION ALL, it now stands out like
a sore thumb, which seems like a good thing.)
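For illustration (the table and view names here are hypothetical), the
pretty-printed decompilation of a view like this now lists the UNION arms
at a single indentation level rather than nesting each arm deeper:
    CREATE TABLE t (x int);
    CREATE VIEW v AS
      SELECT x FROM t
      UNION
      SELECT x FROM t
      UNION
      SELECT x FROM t;
    -- Pretty-printed definition; the UNION branches come out at one level.
    SELECT pg_get_viewdef('v'::regclass, true);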
Back-patch to 9.3. This completes our response to a complaint from Greg
Stark that since commit 62e666400d there's a performance problem in pg_dump
for views containing long UNION sequences (or other types of deeply nested
constructs). The previous commit 0601cb54dac14d979d726ab2ebeda251ae36e857
handles the general problem, but this one makes the specific case of UNION
lists look a lot nicer.
Continuing to indent no matter how deeply nested we get doesn't really
do anything for readability; what's worse, it results in O(N^2) total
whitespace, which can become a performance and memory-consumption issue.
To address this, once we get past 40 characters of indentation, reduce
the indentation step distance 4x, and also limit the maximum indentation
by reducing it modulo 40. This latter choice is a bit weird at first
glance, but it seems to preserve readability better than a simple cap
would do.
Back-patch to 9.3, because since commit 62e666400d the performance issue
is a hazard for pg_dump.
Greg Stark and Tom Lane
The code attempted to outdent JOIN clauses further left than the parent
FROM keyword, which was odd in any case, and led to inconsistent formatting
since in simple cases the clauses couldn't be moved any further left than
that. And it left a permanent decrement of the indentation level, causing
subsequent lines to be much further left than they should be (again, this
couldn't be seen in simple cases for lack of indentation to give up).
After a little experimentation I chose to make it indent JOIN keywords
two spaces from the parent FROM, which is one space more than the join's
lefthand input in cases where that appears on a different line from FROM.
Back-patch to 9.3. This is a purely cosmetic change, and the bug is quite
old, so that may seem arbitrary; but we are going to be making some other
changes to the indentation behavior in both HEAD and 9.3, so it seems
reasonable to include this in 9.3 too. I committed this one first because
its effects are more visible in the regression test results as they
currently stand than they will be later.
Some popen() calls were missing SYSTEMQUOTEs, which caused initdb and
pg_upgrade to fail on Windows, if the installation path contained both
spaces and @ signs.
Patch by Nikhil Deshpande. Backpatch to all supported versions.
In general we can't discard constant-NULL inputs, since they could change
the result of the AND/OR to be NULL. But at top level of WHERE, we do not
need to distinguish a NULL result from a FALSE result, so it's okay to
treat NULL as FALSE and then simplify AND/OR accordingly.
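As a minimal illustration (table t is hypothetical), this is the kind of
simplification that becomes possible:
    -- At top level of WHERE, a NULL result is as good as FALSE, so the
    -- constant-NULL arm can be dropped and the OR simplified away:
    SELECT * FROM t WHERE x = 1 OR NULL;
    -- ... can now be planned the same as:
    SELECT * FROM t WHERE x = 1;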
This is a very ancient oversight, but in 9.2 and later it can lead to
failure to optimize queries that previous releases did optimize, as a
result of more aggressive parameter substitution rules making it possible
to reduce more subexpressions to NULL constants. This is the root cause of
bug #10171 from Arnold Scheffler. We could alternatively have fixed that
by teaching orclauses.c to ignore constant-NULL OR arms, but it seems
better to get rid of them globally.
I resisted the temptation to back-patch this change into all active
branches, but it seems appropriate to back-patch as far as 9.2 so that
there will not be performance regressions of the kind shown in this bug.
In writeListPage, never take a full-page image of the page, because we
have all the information required to re-initialize in the WAL record
anyway. Before this fix, a full-page image was always generated, unless
full_page_writes=off, because when the page is initialized its LSN is
always 0. In stable branches, keep the code to restore the backup blocks
if they exist, in case the WAL was generated by an older minor version;
but in master, Assert that there are no full-page images.
In the redo routine, add missing "off++". Otherwise the tuples are added
to the page in reverse order. That happens to be harmless because we
always scan and remove all the tuples together, but it was clearly wrong.
Also, it was masked by the first bug unless full_page_writes=off, because
the page was always restored from a full-page image.
Backpatch to all supported versions.
pg_controldata includes postgres.h not postgres_fe.h, so utils/palloc.h
must be able to compile in a "#define FRONTEND" context. It appears that
Solaris Studio is smart enough to persuade us to define PG_USE_INLINE,
but not smart enough to not make a copy of unreferenced static functions;
which leads to an unsatisfied reference to CurrentMemoryContext. So we
need an #ifndef FRONTEND around that declaration. Per buildfarm.
This breaks the principle that common/ ought not depend on anything in the
server, not only code-wise but in the headers. The only arguable advantage
is avoidance of duplication of half a dozen extern declarations, and even
that is rather dubious, considering that the previous coding was wrong
about which declarations to duplicate: it exposed pnstrdup() to frontend
code even though no such function is provided in fe_memutils.c.
On the same principle, don't #include utils/memutils.h in the frontend
build of psprintf.c. This requires duplicating the definition of
MaxAllocSize, but that seems fine to me: there's no a priori reason why
frontend code should use the same size limit as the backend anyway.
In passing, clean up some rather odd layout and ordering choices that
were imposed on palloc.h to reduce the number of #ifdefs required by
the previous approach.
Per gripe from Christoph Berg. There's still more work to do to make
include/common/ clean, but this part seems reasonably noncontroversial.
If a tuple is locked, and this lock is later upgraded either to an
update or to a stronger lock, and in the meantime some other process
tries to lock, update or delete the same tuple, it (the tuple) could end
up being updated twice, or having conflicting locks held.
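Very roughly, the shape of the interleaving at issue (table t is
hypothetical; the actual failure is a narrow race between these steps):
    -- Session 1:
    BEGIN;
    SELECT * FROM t WHERE id = 1 FOR KEY SHARE;  -- weak tuple lock
    -- Session 2, concurrently (a no-key update need not block on KEY SHARE):
    UPDATE t SET val = val + 1 WHERE id = 1;
    -- Session 1, later in the same transaction, upgrades its lock to an update:
    UPDATE t SET val = val + 1 WHERE id = 1;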
The reason for this is that the second updater checks for a change in
Xmax value, or in the HEAP_XMAX_IS_MULTI infomask bit, after noticing
the first lock; and if there's a change, it restarts and re-evaluates
its ability to update the tuple. But it neglected to check for changes
in lock strength or in lock-vs-update status when those two properties
stayed the same. This would lead it to take the wrong decision and
continue with its own update, when in reality it shouldn't do so but
instead restart from the top.
This could lead to either an assertion failure much later (when a
multixact containing multiple updates is detected), or duplicate copies
of tuples.
To fix, make sure to compare the other relevant infomask bits alongside
the Xmax value and HEAP_XMAX_IS_MULTI bit, and restart from the top if
necessary.
Also, in the belt-and-suspenders spirit, add a check to
MultiXactCreateFromMembers that a multixact being created does not have
two or more members that are claimed to be updates. This should protect
against other bugs that might cause similar bogus situations.
Backpatch to 9.3, where the possibility of multixacts containing updates
was introduced. (In prior versions it was possible to have the tuple
lock upgraded from shared to exclusive, and an update would not restart
from the top; yet we're protected against a bug there because there's
always a sleep to wait for the locking transaction to complete before
continuing to do anything. Really, the fact that tuple locks always
conflicted with concurrent updates is what protected against bugs here.)
Per report from Andrew Dunstan and Josh Berkus in thread at
http://www.postgresql.org/message-id/534C8B33.9050807@pgexperts.com
Bug analysis by Andres Freund.
Once we've completed a PREPARE, our session is not running a transaction,
so its entry in pg_stat_activity should show xact_start as null, rather
than leaving the value as the start time of the now-prepared transaction.
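A quick way to see the corrected behavior (the gid 'demo' is arbitrary,
and max_prepared_transactions must be nonzero):
    -- Session 1:
    BEGIN;
    PREPARE TRANSACTION 'demo';
    -- Session 1 is now idle and not in a transaction.
    -- Session 2: session 1's row should now show a null xact_start, not the
    -- start time of the prepared transaction:
    SELECT pid, state, xact_start FROM pg_stat_activity;
    -- Cleanup, from any session:
    ROLLBACK PREPARED 'demo';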
I think possibly this oversight was triggered by faulty extrapolation
from the adjacent comment that says PrepareTransaction should not call
AtEOXact_PgStat, so tweak the wording of that comment.
Noted by Andres Freund while considering bug #10123 from Maxim Boguk,
although this error doesn't seem to explain that report.
Back-patch to all active branches.
pg_sequence_parameters() and pg_identify_object() have had incorrect
proallargtypes entries since 9.1 and 9.3 respectively. This was mostly
masked by the correct information in proargtypes, but a few operations
such as pg_get_function_arguments() (and thus psql's \df display) would
show the wrong data types for these functions' input parameters.
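One way to see the symptom, using pg_sequence_parameters as the example:
    -- psql's \df goes through this; before the fix it could report the
    -- wrong input parameter types for the two affected functions:
    SELECT pg_get_function_arguments('pg_sequence_parameters'::regproc);
    -- proargtypes (the per-position input argument types) was correct:
    SELECT proargtypes FROM pg_proc WHERE proname = 'pg_sequence_parameters';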
In HEAD, fix the wrong info, bump catversion, and add an opr_sanity
regression test to catch future mistakes of this sort.
In the back branches, just fix the wrong info so that installations
initdb'd with future minor releases will have the right data. We
can't force an initdb, and it doesn't seem like a good idea to add
a regression test that will fail on existing installations.
Andres Freund
pgss_post_parse_analyze() neglected to pass the call on to any earlier
occupant of the post_parse_analyze_hook. There are no other users of that
hook in contrib/, and most likely none in the wild either, so this is
probably just a latent bug. But it's a bug nonetheless, so back-patch
to 9.2 where this code was introduced.
Introduced in 585bca39: msgid is not used in the Windows code path.
Also adjust comments a tad (mostly to keep pgindent from messing them up).
David Rowley
Compilers may optimize away a memset() whose target is about to go out of
scope; specifically, an on-stack memset() might be removed, leaving the
sensitive data behind. So:
* Replace memset() with px_memset()
* Add px_memset to copy_crlf()
* Add px_memset to pgp-s2k.c
Patch by Marko Kreen
Report by PVS-Studio
Backpatch through 8.4.
Non-existent tablespace directory references can occur if user
tablespaces are created inside data directories and the data directory
is renamed in preparation for running pg_upgrade, and the symbolic links
are not updated.
Backpatch to 9.3.
If we set the all-visible flag after writing the WAL record, and XLogInsert
takes a full-page image of the page, the image would not include the flag.
We will then proceed to set the VM bit, which would then be set without the
corresponding all-visible flag on the heap page.
Found by comparing page images on master and standby, after writing/replaying
each WAL record. (There is still a discrepancy: the all-visible flag won't
be set after replaying the HEAP_CLEAN record, even though it is set in the
master. However, it will be set when replaying the HEAP2_VISIBLE record and
setting the VM bit, so the all-visible flag and VM bit are always consistent
on the standby, even though they are momentarily out of sync with the master.)
Backpatch to 9.3 where this code was introduced.
We were emitting "(SELECT null::typename)", which is usually interpreted
as a scalar subselect, but not so much in the context "x = ANY(...)".
This led to remote-side parsing failures when remote_estimate is enabled.
A quick and ugly fix is to stick in an extra cast step,
"((SELECT null::typename)::typename)". The cast will be thrown away as
redundant by parse analysis, but not before it's done its job of making
sure the grammar sees the ANY argument as an a_expr rather than a
select_with_parens. Per an example from Hannu Krosing.
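A standalone way to see the grammar issue the extra cast works around
(int4[] here merely stands in for whatever array type gets emitted):
    -- Without the outer cast, the parenthesized SELECT is read as a
    -- sub-select argument to ANY rather than as a scalar a_expr:
    --     ... WHERE x = ANY ((SELECT NULL::int4[]))
    -- With the extra cast, the grammar sees an a_expr and the cast is later
    -- thrown away as redundant:
    SELECT 1 WHERE 1 = ANY (((SELECT NULL::int4[])::int4[]));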
This has probably been broken for quite a long time. Buildfarm member
currawong's current results suggest that it's been broken since 9.1, so
backpatch this to that branch.
This only supports Python 2; I will handle Python 3 separately, but
this is a fairly simple fix.
According to the Single Unix Spec and assorted man pages, you're supposed
to use the constants named AF_xxx when setting ai_family for a getaddrinfo
call. In a few places we were using PF_xxx instead. Use of PF_xxx
appears to be an ancient BSD convention that was not adopted by later
standardization. On BSD and most later Unixen, it doesn't matter much
because those constants have equivalent values anyway; but nonetheless
this code is not per spec.
In the same vein, replace PF_INET by AF_INET in one socket() call, which
wasn't even consistent with the other socket() call in the same function,
let alone the remainder of our code.
Per investigation of a Cygwin trouble report from Marco Atzeri. It's
probably a long shot that this will fix his issue, but it's wrong in
any case.
Bind attempts to an LDAP server should time out after two seconds,
allowing additional lines in the connection service file to be parsed
(which provide a fallback to a secondary LDAP server or default options).
The existing code failed to enforce that timeout during TCP connect,
resulting in a hang far longer than two seconds if the LDAP server
does not respond.
Laurenz Albe
Previously, some places detected socket creation errors by checking for a
negative return value, which does not work on Windows because socket
handles there are unsigned. This masked socket creation errors on Windows.
Backpatch through 9.0. 8.4 doesn't have the infrastructure to fix this.
I mixed up BLCKSZ and XLOG_BLCKSZ when I changed the way the buffer is
allocated a couple of weeks ago. With the default settings, they are both
8k, but they can be changed at compile-time.
Don't reset the rightlink of a page when replaying a page update record.
This was a leftover from pre-hot standby days, when it was not possible to
have scans concurrent with WAL replay. Resetting the right-link was not
necessary back then either, but it was done for the sake of tidiness. But
with hot standby, it's wrong, because a concurrent scan might still need it.
Backpatch to all versions with hot standby, 9.0 and above.
The one existing assertion of this type has tripped a few times in the
buildfarm lately, but it's not clear whether the problem is really
originating there or whether it's leftovers from a trip through one
of the other two paths that lack a matching assertion. So add one.
Since the same bug(s) most likely exist(s) in the back-branches also,
back-patch to 9.2, where the fast-path lock mechanism was added.
Formerly, we set up the postmaster's signal handling only when we were
about to start launching subprocesses. This is a bad idea though, as
it means that for example a SIGINT arriving before that will kill the
postmaster instantly, perhaps leaving lockfiles, socket files, shared
memory, etc. lying about. We'd rather that such a signal caused orderly
postmaster termination including releasing of those resources. A simple
fix is to move the PostmasterMain stanza that initializes signal handling
to an earlier point, before we've created any such resources. Then, an
early-arriving signal will be blocked until we're ready to deal with it
in the usual way. (The only part that really needs to be moved up is
blocking of signals, but it seems best to keep the signal handler
installation calls together with that; for one thing this ensures the
kernel won't drop any signals we wished to get. The handlers won't get
invoked in any case until we unblock signals in ServerLoop.)
Per a report from MauMau. He proposed changing the way "pg_ctl stop"
works to deal with this, but that'd just be masking one symptom not
fixing the core issue.
It's been like this since forever, so back-patch to all supported branches.
EXEC_BACKEND builds (i.e., Windows) failed to absorb values of PGC_BACKEND
parameters if they'd been changed post-startup via the config file. This
for example prevented log_connections from working if it were turned on
post-startup. The mechanism for handling this case has always been a bit
of a kluge, and it wasn't revisited when we implemented EXEC_BACKEND.
While in a normal forking environment new backends will inherit the
postmaster's value of such settings, EXEC_BACKEND backends have to read
the settings from the CONFIG_EXEC_PARAMS file, and they were mistakenly
rejecting them. This case has therefore always been broken in the Windows
port, so back-patch to all supported branches.
Amit Kapila
The code segment that removes the old symlink (if present) wasn't clued
into the fact that on Windows, symlinks are junction points which have
to be removed with rmdir().
Backpatch to 9.0, where the failing code was introduced.
MauMau, reviewed by Muhammad Asif Naeem and Amit Kapila
There's no really compelling reason to refuse to do these read-only,
non-server-starting options as root, and there's at least one good
reason to allow -C: pg_ctl uses -C to find out the true data directory
location when pointed at a config-only directory. On Windows, this is
done before dropping administrator privileges, which means that pg_ctl
fails for administrators if and only if a config-only layout is used.
Since the root-privilege check is done so early in startup, it's a bit
awkward to check for these switches. Make the somewhat arbitrary
decision that we'll only skip the root check if -C is the first switch.
This is not just to make the code a bit simpler: it also guarantees that
we can't misinterpret a --boot mode switch. (While AuxiliaryProcessMain
doesn't currently recognize any such switch, it might have one in the
future.) This is no particular problem for pg_ctl, and since the whole
behavior is undocumented anyhow, it's not a documentation issue either.
(--describe-config only works as the first switch anyway, so this is
no restriction for that case either.)
Back-patch to 9.2 where pg_ctl first began to use -C.
MauMau, heavily edited by me
The displayed sendtime and receipttime were always exactly equal, because
somebody forgot that timestamptz_to_str returns a static buffer (thereby
simplifying life for most callers, at the cost of complicating it for those
who need two results concurrently). Apply the same pstrdup solution used
by the other call sites with this issue. Back-patch to 9.2 where the
faulty code was introduced. Per bug #9849 from Haruka Takatsuka, though
this is not exactly his patch.
Possibly we should change timestamptz_to_str's API, but I wouldn't want
to do so in the back branches.
For variadic functions (other than VARIADIC ANY), the syntaxes foo(x,y,...)
and foo(VARIADIC ARRAY[x,y,...]) should be considered equivalent, since the
former is converted to the latter at parse time. They have indeed been
equivalent, in all releases before 9.3. However, commit 75b39e790 made an
ill-considered decision to record which syntax had been used in FuncExpr
nodes, and then to make equal() test that in checking node equality ---
which caused the syntaxes to not be seen as equivalent by the planner.
This is the underlying cause of bug #9817 from Dmitry Ryabov.
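For a non-ANY variadic function, the two syntaxes in question look like
this (foo is hypothetical):
    CREATE FUNCTION foo(VARIADIC xs int[]) RETURNS int
      LANGUAGE sql AS 'SELECT array_length($1, 1)';
    -- These are converted to the same thing at parse time, and so should be
    -- treated as equivalent, e.g. when matching an expression index:
    SELECT foo(1, 2, 3);
    SELECT foo(VARIADIC ARRAY[1, 2, 3]);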
It might seem that a quick fix would be to make equal() disregard
FuncExpr.funcvariadic, but the same commit made that untenable, because
the field actually *is* semantically significant for some VARIADIC ANY
functions. This patch instead adopts the approach of redefining
funcvariadic (and aggvariadic, in HEAD) as meaning that the last argument
is a variadic array, whether it got that way by parser intervention or was
supplied explicitly by the user. Therefore the value will always be true
for non-ANY variadic functions, restoring the principle of equivalence.
(However, the planner will continue to consider use of VARIADIC as a
meaningful difference for VARIADIC ANY functions, even though some such
functions might disregard it.)
In HEAD, this change lets us simplify the decompilation logic in
ruleutils.c, since the funcvariadic/aggvariadic flag tells directly whether
to print VARIADIC. However, in 9.3 we have to continue to cope with
existing stored rules/views that might contain the previous definition.
Fortunately, this just means no change in ruleutils.c, since its existing
behavior effectively ignores funcvariadic for all cases other than VARIADIC
ANY functions.
In HEAD, bump catversion to reflect the fact that FuncExpr.funcvariadic
changed meanings; this is sort of pro forma, since I don't believe any
built-in views are affected.
Unfortunately, this patch doesn't magically fix everything for affected
9.3 users. After installing 9.3.5, they might need to recreate their
rules/views/indexes containing variadic function calls in order to get
everything consistent with the new definition. As in the cited bug,
the symptom of a problem would be failure to use a nominally matching
index that has a variadic function call in its definition. We'll need
to mention this in the 9.3.5 release notes.
The advice to join to pg_prepared_xacts via the transaction column was not
updated when the transaction column was replaced by virtualtransaction.
Since it's not quite obvious how to do that join, give an explicit example.
For consistency also give an example for the adjacent case of joining to
pg_stat_activity. And link-ify the view references too, just because we
can. Per bug #9840 from Alexey Bashtanov.
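The join the documentation now spells out goes roughly like this (prepared
transactions show up in pg_locks with a virtualtransaction of the form
'-1/<xid>'):
    SELECT l.locktype, l.mode, l.granted, p.gid
    FROM pg_locks l
         LEFT JOIN pg_prepared_xacts p
                ON l.virtualtransaction = '-1/' || p.transaction;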
Michael Paquier and Tom Lane
It's been 16 bytes, not 12, for ages. This was fixed in passing in HEAD
(commit 146604ec), but as a factual error it should have been back-patched.
Per gripe from Tatsuhito Kasahara.
Memory allocation can fail if you run out of memory, and inside a critical
section that will lead to a PANIC. Use conservatively sized arrays on the
stack instead.
There was previously no explicit limit on the number of pages a GiST split
can produce; it was only limited by the number of LWLocks that can be held
simultaneously (100 at the moment). This patch adds an explicit limit of 75
pages. That should be plenty; a typical split shouldn't produce more than
2-3 page halves.
The bug has been there forever, but only backpatch down to 9.1. The code
was changed significantly in 9.1, and it doesn't seem worth the risk or
trouble to adapt this for 9.0 and 8.4.
The code for matching clients to pg_hba.conf lines that specify host names
(instead of IP address ranges) failed to complain if reverse DNS lookup
failed; instead it silently didn't match, so that you might end up getting
a surprising "no pg_hba.conf entry for ..." error, as seen in bug #9518
from Mike Blackwell. Since we don't want to make this a fatal error in
situations where pg_hba.conf contains a mixture of host names and IP
addresses (clients matching one of the numeric entries should not have to
have rDNS data), remember the lookup failure and mention it as DETAIL if
we get to "no pg_hba.conf entry". Apply the same approach to forward-DNS
lookup failures, too, rather than treating them as immediate hard errors.
Along the way, fix a couple of bugs that prevented us from detecting an
rDNS lookup error reliably, and make sure that we make only one rDNS lookup
attempt; formerly, if the lookup attempt failed, the code would try again
for each host name entry in pg_hba.conf. Since more or less the whole
point of this design is to ensure there's only one lookup attempt, not one
per entry, the latter point represents a performance bug that seems
sufficient justification for back-patching.
Also, adjust src/port/getaddrinfo.c so that it plays as well as it can
with this code. Which is not all that well, since it does not have actual
support for rDNS lookup, but at least it should return the expected (and
required by spec) error codes so that the main code correctly perceives the
lack of functionality as a lookup failure. It's unlikely that PG is still
being used in production on any machines that require our getaddrinfo.c,
so I'm not excited about working harder than this.
To keep the code in the various branches similar, this includes
back-patching commits c424d0d1052cb4053c8712ac44123f9b9a9aa3f2 and
1997f34db4687e671690ed054c8f30bb501b1168 into 9.2 and earlier.
Back-patch to 9.1 where the facility for hostnames in pg_hba.conf was
introduced.
Initialization of this field was not being done according to the
st_changecount protocol (it has to be done within the changecount increment
range, not outside). And the test to see if the value should be reported
as null was wrong. Noted while perusing uses of Port.remote_hostname.
This was wrong from the introduction of this code (commit 4a25bc145),
so back-patch to 9.1.
Otherwise, the compiler might decide to move modifications to data
within this structure outside the enclosing SpinLockAcquire /
SpinLockRelease pair, leading to shared memory corruption.
This may or may not explain a recent lmgr-related buildfarm failure
on prairiedog, but it needs to be fixed either way.
Previously, such buffers weren't counted, with the possible result that
EXPLAIN (BUFFERS) and pg_stat_statements would understate the true
number of blocks dirtied by an SQL statement.
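The counter in question is the one surfaced here (table t is hypothetical):
    -- The "dirtied" figure in the Buffers: line of this output, and the
    -- shared_blks_dirtied column of pg_stat_statements, could previously
    -- be too low:
    EXPLAIN (ANALYZE, BUFFERS) UPDATE t SET val = val + 1;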
Back-patch to 9.2, where this counter was introduced.
Amit Kapila
It is possible for a view or materialized view to depend on a table's
primary key, if the view query relies on functional dependency to
abbreviate a GROUP BY list. This is problematic for pg_dump since we
ordinarily want to dump view definitions in the pre-data section but
indexes in post-data. pg_dump knows how to deal with this situation for
regular views, by breaking the view's ON SELECT rule apart from the view
proper. But it had not been taught what to do about materialized views,
and in fact mistakenly dumped them as regular views in such cases, as
seen in bug #9616 from Jesse Denardo.
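A sketch of the kind of matview that creates the circularity (names are
hypothetical):
    CREATE TABLE t (id int PRIMARY KEY, val text);
    -- Because id is the primary key, val may appear ungrouped (functional
    -- dependency), so this matview's query depends on t's primary key:
    CREATE MATERIALIZED VIEW mv AS
      SELECT id, val FROM t GROUP BY id;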
If we had CREATE OR REPLACE MATERIALIZED VIEW, we could fix this in a
manner analogous to what's done for regular views; but we don't yet,
and we'd not back-patch such a thing into 9.3 anyway. As a hopefully-
temporary workaround, break the circularity by postponing the matview
into post-data altogether when this case occurs.
Any OS user able to access the socket can connect as the bootstrap
superuser and in turn execute arbitrary code as the OS user running the
test. Protect against that by placing the socket in the temporary data
directory, which has mode 0700 thanks to initdb. Back-patch to 8.4 (all
supported versions). The hazard remains wherever the temporary cluster
accepts TCP connections, notably on Windows.
Attempts to run "make check" from a directory with a long name will now
fail. An alternative not sharing that problem was to place the socket
in a subdirectory of /tmp, but that is only secure if /tmp is sticky.
The PG_REGRESS_SOCK_DIR environment variable is available as a
workaround when testing from long directory paths.
As a convenient side effect, this lets testing proceed smoothly in
builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values
like /var/run/postgresql are often unwritable to the build user.
Security: CVE-2014-0067
This reverts commit 2a33b796ee27732c1854f41594329afbb9ffba3d.
The change requiring cdecimal is new in 9.4 (see 7919398bac),
so we should not claim previous branches need it.