This reverts commit 458a1244f1fcf407874482a93b7631ecf5303d6e.
It has portability problems on Windows, which will require
a little bit of research to fix.
Discussion: https://postgr.es/m/20202.1548035461@sss.pgh.pa.us
Previously, in set_append_rel_size(), we generated tlists and EC members
for dummy children for possible use by partition-wise join, even if
partition-wise join was disabled or the top parent was not a partitioned
table. Adding such EC members causes noticeable planning speed
degradation for queries with certain kinds of join quals, such as
"(foo.x + bar.y) = constant" where foo and bar are partitioned tables,
when there are lots of dummy children: the EC member lists grow huge,
especially for the ECs derived from such join quals, which makes the
search for the parent EC members in add_child_rel_equivalences()
very time-consuming. Postpone the work until such children are actually
involved in a partition-wise join.
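For illustration only, a hedged sketch of the query shape described above
(table and column names are made up):

    CREATE TABLE foo (x int) PARTITION BY RANGE (x);
    CREATE TABLE bar (y int) PARTITION BY RANGE (y);
    -- ... lots of partitions on each side, many of which end up as dummy
    -- (proven-empty) children during planning ...

    -- A join qual of this form derives an EC whose member lists used to
    -- grow with every dummy child, making planning noticeably slower:
    SELECT * FROM foo JOIN bar ON (foo.x + bar.y) = 0;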
Reported-by: Sanyo Capobiango
Analyzed-by: Justin Pryzby and Alvaro Herrera
Author: Amit Langote, with a few additional changes by me
Reviewed-by: Ashutosh Bapat
Backpatch-through: v11 where partition-wise join was added
Discussion: https://postgr.es/m/CAO698qZnrxoZu7MEtfiJmpmUtz3AVYFVnwzR%2BpqjF%3DrmKBTgpw%40mail.gmail.com
This reverts commit bf070ce09e05943d6484de0ec17c7b02f2690a6d.
Per discussion, it's not desirable to add valgrind suppressions for code
outside our own code base (e.g. glibc in this case), especially when
the suppressions may be platform-specific. There are better ways to
deal with that, e.g. by providing local suppressions.
Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
Recent OpenBSD (at least 5.9 and up) has a version of getopt(3)
that will not cope with the "-:" spec we use to accept double-dash
options in postgres.c and postmaster.c. Admittedly, that's a hack
because POSIX only requires getopt() to allow alphanumeric option
characters. I have no desire to find another way, however, so
let's just do what we were already doing on Solaris: force use
of our own src/port/getopt.c implementation.
In passing, improve some of the comments around said implementation.
Per buildfarm and local testing. Back-patch to all supported branches.
Discussion: https://postgr.es/m/30197.1547835700@sss.pgh.pa.us
When creating a foreign key in a partitioned table, if some partitions
already have equivalent constraints, we wastefully create duplicates of
the constraints instead of attaching to the existing ones. That's
inconsistent with the de-duplication that is applied when a table is
attached as a partition. To fix, reuse the FK-cloning code instead of
having a separate code path.
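As a sketch of the scenario (object names are hypothetical):

    CREATE TABLE ref (a int PRIMARY KEY);
    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE TABLE part1 PARTITION OF parted FOR VALUES IN (1);

    -- The partition already carries an equivalent constraint ...
    ALTER TABLE part1 ADD FOREIGN KEY (a) REFERENCES ref;
    -- ... so creating the foreign key on the parent should attach to it
    -- rather than create a duplicate constraint on part1:
    ALTER TABLE parted ADD FOREIGN KEY (a) REFERENCES ref;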
Backpatch to Postgres 11. This is a subtle behavior change, but surely
a welcome one since there's no use in having duplicate foreign keys.
Discovered by Álvaro Herrera while thinking about a different problem
reported by Jesper Pedersen (bug #15587).
Author: Álvaro Herrera
Discussion: https://postgr.es/m/201901151935.zfadrzvyof4k@alvherre.pgsql
My commit 3de241dba86f introduced some code to create a clone of a
foreign key to a partition, but I put it in pg_constraint.c because it
was too close to the contents of the pg_constraint row. With the
previous commit that split out the constraint tuple deconstruction into
its own routine, it makes more sense to have the FK-cloning function in
tablecmds.c, mostly because its static subroutine can then be used by a
future bugfix.
My initial posting of this patch had this routine as static in
tablecmds.c, but sadly this function is already part of the Postgres 11
ABI as exported from pg_constraint.c, so keep it exported there as well,
just to avoid breaking any possible users of it.
My commit 3de241dba86f introduced some code (in tablecmds.c) to obtain
data from a pg_constraint row for a foreign key; equivalent code already
existed in ri_triggers.c. Split it out into its own routine in pg_constraint.c,
where it naturally belongs.
No functional code changes, only code movement.
Backpatch to pg11, because a future bugfix is simpler after this.
current_schema() gets called in the regression test recently added in
c5660e0, and it can be run in a parallel context, which causes its call
to fail when it needs to create a temporary schema.
Per buildfarm members crake and lapwing.
Discussion: https://postgr.es/m/20190118005949.GD1883@paquier.xyz
I've had enough of "fixing" this test case. Whatever value it has
is limited to verifying that pgbench fails for an unrecognized switch,
and we don't need to assume anything about what getopt_long prints in
order to do that.
Discussion: https://postgr.es/m/9427.1547701450@sss.pgh.pa.us
Attempting to use a temporary table within a two-phase transaction has
been forbidden for ages. However, a couple of other object types and
commands which work on temporary objects were not covered by the check
with two-phase commit. In short, trying to create, lock or drop an
object in a temporary schema should not be authorized within a
two-phase transaction, as doing so would create dependencies between
the prepared transaction's state and other sessions, causing all sorts
of side effects for the existing session or for sessions spawned later
on that try to use the same temporary schema name.
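For instance, a sequence like the following (object names are made up) is
now expected to be rejected when the transaction is prepared, instead of
leaving the temporary schema entangled with the prepared transaction:

    -- requires max_prepared_transactions > 0
    BEGIN;
    CREATE FUNCTION pg_temp.f() RETURNS int LANGUAGE sql AS 'SELECT 1';
    PREPARE TRANSACTION 'uses_temp_schema';  -- now fails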
Regression tests are added to cover all the cases found. The original
report mentioned function creation, but closer monitoring shows that
many other patterns involving LOCK, DROP or CREATE EXTENSION are
affected as well. One symptom of combining the two is that the session
which used the temporary schema is not able to shut down completely:
it waits to drop the temporary schema, which it cannot do because of
the two-phase transaction involving temporary objects. In this case
the client is able to disconnect, but the session remains alive on the
backend side, potentially blocking connection backend slots from being
used. Other reported problems could also involve server crashes.
This is back-patched down to v10, where 9b013dc introduced MyXactFlags,
which this patch relies on.
Reported-by: Alexey Bashtanov
Author: Michael Paquier
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/5d910e2e-0db8-ec06-dd5f-baec420513c3@imap.cc
Backpatch-through: 10
Previously, parseCheckAggregates was run before
assign_query_collations, but this causes problems if any expression
has already had a collation assigned by some transform function (e.g.
transformCaseExpr) before parseCheckAggregates runs. The differing
collations would cause expressions not to be recognized as equal to
the ones in the GROUP BY clause, leading to spurious errors about
unaggregated column references.
The result was that CASE expr WHEN val ... would fail when "expr"
contained a GROUPING() expression or matched one of the group by
expressions, and where collatable types were involved; whereas the
supposedly identical CASE WHEN expr = val ... would succeed.
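As a hedged sketch of the shape described (names made up; the collatable
expression also appears in GROUP BY):

    CREATE TABLE t (x text);

    -- This form could draw a spurious error about an unaggregated column
    -- reference, because the collation already assigned inside the CASE
    -- kept the expression from matching the GROUP BY copy:
    SELECT CASE x || 'z' WHEN 'az' THEN 1 ELSE 2 END, count(*)
      FROM t GROUP BY x || 'z';

    -- The supposedly identical formulation succeeded:
    SELECT CASE WHEN x || 'z' = 'az' THEN 1 ELSE 2 END, count(*)
      FROM t GROUP BY x || 'z';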
Backpatch all the way; this appears to have been wrong ever since
collations were introduced.
Per report from Guillaume Lelarge, analysis and patch by me.
Discussion: https://postgr.es/m/CAECtzeVSO_US8C2Khgfv54ZMUOBR4sWq+6_bLrETnWExHT=rFg@mail.gmail.com
Discussion: https://postgr.es/m/87muo0k0c7.fsf@news-spur.riddles.org.uk
These have been found while cross-checking for the use of unique words
in the documentation, and one wait event was not being generated in a
way consistent with what the documentation describes.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/9b5a3a85-899a-ae62-dbab-1e7943aa5ab1@gmail.com
We were considering the INCLUDE columns as part of the key, allowing
uniqueness-violating rows to be inserted in different partitions.
Concurrent development conflict in eb7ed3f30634 and 8224de4f42cc.
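As a sketch of the problem, as I understand the description (names made
up); a is the partition key but appears only as an INCLUDE column:

    CREATE TABLE parted (a int, b int) PARTITION BY RANGE (a);
    CREATE TABLE part1 PARTITION OF parted FOR VALUES FROM (0) TO (10);
    CREATE TABLE part2 PARTITION OF parted FOR VALUES FROM (10) TO (20);

    -- Such an index cannot actually enforce uniqueness of b across
    -- partitions; counting INCLUDE columns as key columns made it be
    -- accepted anyway, so rows violating uniqueness on b could land in
    -- different partitions:
    CREATE UNIQUE INDEX ON parted (b) INCLUDE (a);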
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20190109065109.GA4285@telsasoft.com
This especially helps braces that surround code blocks. Back-patch to
v11, where commit 56fb890ace8ac0ca955ae0803c580c2074f876f6 first
appeared; before that, settings were even more distant from perltidy.
Reviewed by Andrew Dunstan.
Discussion: https://postgr.es/m/20190103055355.GB267595@gust.leadboat.com
Some makefiles were trying to do this:
temp-install: EXTRA_INSTALL=contrib/test_decoding
but that no longer works as of commit aa019da52: the macro is now
consulted by the checkprep target, one level down, and apparently
gmake doesn't propagate such macro settings recursively.
The problem has been masked since 42e61c774 because pgxs.mk also sets up
EXTRA_INSTALL, and correctly applies it to the checkprep target.
Unfortunately I'd not risked back-patching that to before v11.
Since aa019da52 was pushed back to v10, it broke test_decoding
there (the only module for which this actually makes a difference
at present).
Hence, back-patch 42e61c774 to v10. Also, remove some demonstrably
useless settings of EXTRA_INSTALL in v10 and v11 (they'd already
been cleaned up in HEAD).
Per buildfarm.
Discussion: https://postgr.es/m/CAEepm=1pEJdwv6DSGmOfpX0EaX7L7sT28c1nXpqvQvmLfEWb1g@mail.gmail.com
Up to now, createplan.c attempted to share PARAM_EXEC slots for
NestLoopParams across different plan levels, if the same underlying Var
was being fed down to different righthand-side subplan trees by different
NestLoops. This was, I think, more of an artifact of using subselect.c's
PlannerParamItem infrastructure than an explicit design goal, but anyway
that was the end result.
This works well enough as long as the plan tree is executing synchronously,
but the feature whereby Gather can execute the parallelized subplan locally
breaks it. An upper NestLoop node might execute for a row retrieved from
a parallel worker, and assign a value for a PARAM_EXEC slot from that row,
while the leader's copy of the parallelized subplan is suspended with a
different active value of the row the Var comes from. When control
eventually returns to the leader's subplan, it gets the wrong answers if
the same PARAM_EXEC slot is being used within the subplan, as reported
in bug #15577 from Bartosz Polnik.
This is pretty reminiscent of the problem fixed in commit 46c508fbc, and
the proper fix seems to be the same: don't try to share PARAM_EXEC slots
across different levels of controlling NestLoop nodes.
This requires decoupling NestLoopParam handling from PlannerParamItem
handling, although the logic remains somewhat similar. To avoid bizarre
division of labor between subselect.c and createplan.c, I decided to move
all the param-slot-assignment logic for both cases out of those files
and put it into a new file paramassign.c. Hopefully it's a bit better
documented now, too.
A regression test case for this might be nice, but we don't know a
test case that triggers the problem with a suitably small amount
of data.
Back-patch to 9.6 where we added Gather nodes. It's conceivable that
related problems exist in older branches; but without some evidence
for that, I'll leave the older branches alone.
Discussion: https://postgr.es/m/15577-ca61ab18904af852@postgresql.org
Since approximately PostgreSQL 10, it is no longer required that
environment variables at installation time such as PERL, PYTHON, TCLSH
be "full path names", so change that phrasing in the installation
instructions. (The exact time of change appears to differ for PERL
and the others, but it works consistently in PostgreSQL 10.)
Also, while we're here, document the defaults for PERL and PYTHON; but
since the search list for TCLSH is so long, let's leave that out so we
don't need to maintain a copy of that list in the installation
instructions.
This was an oversight in commit 16828d5c. If the table is going to be
rewritten, we simply clear all the missing values from all the table's
attributes, since there will no longer be any rows with the attributes
missing. Otherwise, we repackage the missing value in an array
constructed with the new type specifications.
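As I read it, the case at stake is a fast default ("missing value")
surviving a column type change; a hedged sketch (names made up):

    CREATE TABLE t (a int);
    INSERT INTO t VALUES (1);
    -- The constant default is stored as a missing value (fast default)
    -- rather than rewriting the table:
    ALTER TABLE t ADD COLUMN v varchar(10) DEFAULT 'hello';
    -- varchar -> text needs no rewrite, so the stored missing value has to
    -- be repackaged for the new type; with a rewrite it is simply cleared,
    -- since every row then stores the value explicitly:
    ALTER TABLE t ALTER COLUMN v TYPE text;
    SELECT v FROM t;   -- still expected to read 'hello'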
Backpatch to release 11.
This fixes bug #15446, reported by Dmitry Molotkov.
Reviewed by Dean Rasheed
For a long time, plpgsql has allowed trigger functions to parse
references to OLD and NEW even if the current trigger event type didn't
assign a value to one or the other variable; but actually executing such
a reference would fail. The v11 changes to use "expanded records" for
DTYPE_REC variables changed the behavior so that the unassigned variable
now reads as a null composite value. While this behavioral change was
more or less unintentional, it seems that leaving it like this is better
than adding code and complexity to be bug-compatible with the old way.
The change doesn't break any code that worked before, and it eliminates
a gotcha that often required extra code to work around.
Hence, update the docs to say that these variables are "null" not
"unassigned" when not relevant to the event type. And add a regression
test covering the behavior, so that we'll notice if we ever break it
again.
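As a minimal sketch of the behavior now documented (names made up):

    CREATE TABLE t (a int);

    CREATE FUNCTION t_trg() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      -- On INSERT there is no OLD row; in v11 the variable simply reads
      -- as a null composite value instead of erroring at execution:
      IF OLD IS NULL THEN
        RAISE NOTICE 'no OLD row for this event';
      END IF;
      RETURN NEW;
    END
    $$;

    CREATE TRIGGER t_ins BEFORE INSERT OR UPDATE ON t
      FOR EACH ROW EXECUTE PROCEDURE t_trg();

    INSERT INTO t VALUES (1);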
Per report from Kristjan Tammekivi.
Discussion: https://postgr.es/m/CAABK7uL-uC9ZxKBXzo_68pKt7cECfNRv+c35CXZpjq6jCAzYYA@mail.gmail.com
runtime.sgml said that you couldn't change SysV IPC parameters on OpenBSD
except by rebuilding the kernel. That's definitely wrong in OpenBSD 6.x,
and excavation in their man pages says it changed in OpenBSD 3.3.
Update NetBSD and OpenBSD sections to recommend adjustment of the SEMMNI
and SEMMNS settings, which are painfully small by default on those
platforms. (The discussion thread contemplated recommending that
people select POSIX semaphores instead, but the performance consequences
of that aren't really clear, so I'll refrain.)
Remove pointless discussion of SEMMNU and SEMMAP from the FreeBSD
section. Minor other wordsmithing.
Discussion: https://postgr.es/m/27582.1546928073@sss.pgh.pa.us
This patch changes the rule for whether or not a tuple seen by ANALYZE
should be included in its sample.
When we last touched this logic, in commit 51e1445f1, we weren't
thinking very hard about tuples being UPDATEd by a long-running
concurrent transaction. In such a case, we might see the pre-image as
either LIVE or DELETE_IN_PROGRESS depending on timing; and we might see
the post-image not at all, or as INSERT_IN_PROGRESS. Since the existing
code will not sample either DELETE_IN_PROGRESS or INSERT_IN_PROGRESS
tuples, this leads to concurrently-updated rows being omitted from the
sample entirely. That's not very helpful, and it's especially the wrong
thing if the concurrent transaction ends up rolling back.
The right thing seems to be to sample DELETE_IN_PROGRESS rows just as if
they were live. This makes the "sample it" and "count it" decisions the
same, which seems good for consistency. It's clearly the right thing
if the concurrent transaction ends up rolling back; in effect, we are
sampling as though IN_PROGRESS transactions haven't happened yet.
Also, this combination of choices ensures maximum robustness against
the different combinations of whether and in which state we might see the
pre- and post-images of an update.
It's slightly annoying that we end up recording immediately-out-of-date
stats in the case where the transaction does commit, but on the other
hand the stats are fine for columns that didn't change in the update.
And the alternative of sampling INSERT_IN_PROGRESS rows instead seems
like a bad idea, because then the sampling would be inconsistent with
the way rows are counted for the stats report.
Per report from Mark Chambers; thanks to Jeff Janes for diagnosing
what was happening. Back-patch to all supported versions.
Discussion: https://postgr.es/m/CAFh58O_Myr6G3tcH3gcGrF-=OExB08PJdWZcSBcEcovaiPsrHA@mail.gmail.com
Debian testing and newer now require that RSA and DHE keys are at
least 2048 bits long and no longer allow SHA-1 for signatures in
certificates. This is currently causing the ssl tests to fail there
because the test certificates and keys have been created in violation
of those conditions.
Update the parameters to create the test files and create a new set of
test files.
Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20180917131340.GE31460%40paquier.xyz
MinMaxExpr invokes the btree comparison function for its input datatype,
so it's only leakproof if that function is. Many such functions are
indeed leakproof, but others are not, and we should not just assume that
they are. Hence, adjust contain_leaked_vars to verify the leakproofness
of the referenced function explicitly.
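For illustration, a hedged way to see why that matters: GREATEST()/LEAST()
expand to a MinMaxExpr calling the input type's btree comparison function,
and one can inspect how such functions are marked (the function names here
are just examples):

    SELECT proname, proleakproof
      FROM pg_proc
     WHERE proname IN ('btint4cmp', 'bttextcmp');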
I didn't add a regression test because it would need to depend on
some particular comparison function being leaky, and that's a moving
target, per discussion.
This has been wrong all along, so back-patch to supported branches.
Discussion: https://postgr.es/m/31042.1546194242@sss.pgh.pa.us
fe0a0b5, which added a stronger random source to Postgres, introduced a
thinko when creating a padding message which gets encrypted for Elgamal.
The padding message cannot contain zero bytes, so any zeros are replaced
by random bytes. However, if pg_strong_random() failed, the message would
still end up being considered in correct shape for encryption even though
it contained zeros.
Author: Tom Lane
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20186.1546188423@sss.pgh.pa.us
Backpatch-through: 10
Detect it the way pg_ctl's wait_for_postmaster() does. When pg_regress
spawned a postmaster that failed startup, we were detecting that only
with "pg_regress: postmaster did not respond within 60 seconds".
Back-patch to 9.4 (all supported versions).
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/20181231172922.GA199150@gust.leadboat.com
POSIX specifies that jrand48() returns a signed 32-bit value (in the
range [-2^31, 2^31)), but our code was returning an unsigned 32-bit
value (in the range [0, 2^32)). This doesn't actually matter to any
existing call site, because they all cast the "long" result to int32
or uint32; but it will doubtless bite somebody in the future.
To fix, cast the arithmetic result to int32 explicitly before the
compiler widens it to long (if widening is needed).
While at it, upgrade this file's far-short-of-project-style comments.
Had there been some peer pressure to document pg_jrand48() properly,
maybe this thinko wouldn't have gotten committed to begin with.
Backpatch to v10 where pg_jrand48() was added, just in case somebody
back-patches a fix that uses it and depends on the standard behavior.
Discussion: https://postgr.es/m/17235.1545951602@sss.pgh.pa.us
The isolation test suite for GIN predicate locking was criticized for being
too slow, especially under Valgrind. This commit is intended to accelerate
it. The tests are simplified in the following ways.
1) The amount of data is reduced. We're now close to the minimal amount of
data which produces at least one posting tree and at least two pages of
entry tree.
2) Three isolation tests are merged into one.
3) Only one tuple is queried from the posting tree. Index locking is the
same, but tuple locks are not propagated to a relation lock. It is also
faster.
4) The test cases themselves are simplified. Each test case now runs just
one INSERT and one SELECT involving GIN, which either conflict or do not.
Discussion: https://postgr.es/m/20181204000740.ok2q53nvkftwu43a%40alap3.anarazel.de
Reported-by: Andres Freund
Tested-by: Andrew Dunstan
Author: Alexander Korotkov
Backpatch-through: 11
According to the README, we acquire predicate locks on entry tree leaves and
posting tree roots. However, when ginFindLeafPage() is going to lock a leaf
in exclusive mode, it checks the root for conflicts regardless of whether
it's an entry or a posting tree. Assuming that we never place a predicate
lock on an entry tree root (excluding the corner case when the root is a
leaf), this check is redundant. This commit removes it. Root conflict
checking is now controlled by a separate argument of ginFindLeafPage().
Discussion: https://postgr.es/m/CAPpHfdv7rrDyy%3DMgsaK-L9kk0AH7az0B-mdC3w3p0FSb9uoyEg%40mail.gmail.com
Author: Alexander Korotkov
Backpatch-through: 11
Inheritance trees can include temporary tables if the parent is
permanent, which makes it possible to have multiple temporary children
belonging to different sessions. Issuing a TRUNCATE on the parent in
this scenario causes a failure, so, as with any other query, just
ignore such children, which makes TRUNCATE work transparently.
This makes truncation behave like any other DML query working on the
parent table, where the work needs to be issued on the children. A
set of isolation tests is added to cover basic cases.
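A minimal sketch of the scenario (names made up):

    -- Session 1:
    CREATE TABLE parent (a int);

    -- Session 2:
    CREATE TEMP TABLE child () INHERITS (parent);

    -- Session 1: the temporary child belongs to another session, so it is
    -- now silently skipped instead of making the whole command fail:
    TRUNCATE parent;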
Reported-by: Zhou Digoal
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/15565-ce67a48d0244436a@postgresql.org
Backpatch-through: 9.4
I made a frontend fprintf() format use %m, forgetting that that's only
safe in HEAD not the back branches; prior to 96bf88d52 and d6c55de1f,
it would work on glibc platforms but not elsewhere. Revert to using
%s ... strerror(errno) as the code did before.
We could have left HEAD as-is, but for code consistency across branches,
I chose to apply this patch there too.
Per Coverity and a few buildfarm members.
At the end of recovery, the post-promotion process creates a new history
file followed by the last partial segment of the previous timeline.
Depending on the timing, the archiver would first try to archive the
last partial segment and then the history file. This can delay the
detection that a new timeline has been taken, particularly when the
last partial segment takes a while to transfer, since that delays the
moment the history file of the new timeline gets archived. Depending
on the circumstances, this can cause promoted standbys to use a
timeline already taken by another instance if multiple instances look
at archives at the same location.
This commit changes the order of archiving so that history files are
archived in priority over other file types. This reduces the likelihood
of the same timeline being taken (though it does not close the window
entirely), and it makes the archiver behave more consistently with the
startup process doing its post-promotion business.
Author: David Steele
Reviewed-by: Michael Paquier, Kyotaro Horiguchi
Discussion: https://postgr.es/m/929068cf-69e1-bba2-9dc0-e05986aed471@pgmasters.net
Backpatch-through: 9.5
COPY can skip writing WAL when loading data into a table which has been
created in the same transaction as the one loading the data. However,
this cannot work for views or foreign tables, as it would result in
trying to flush relation files which do not exist. So disable the
optimization in those cases, so that the commands work the same way with
any configuration of wal_level.
Tests are added to cover the different cases; they need wal_level set to
minimal to allow the problem to show up, which is not the default
configuration.
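A hedged sketch of a previously-problematic case (requires wal_level =
minimal to exercise the WAL-skip path; names made up):

    BEGIN;
    CREATE TABLE base (a int);
    CREATE VIEW v AS SELECT a FROM base;
    CREATE FUNCTION v_ins() RETURNS trigger LANGUAGE plpgsql AS
      $$ BEGIN INSERT INTO base VALUES (NEW.a); RETURN NEW; END $$;
    CREATE TRIGGER v_ins_trg INSTEAD OF INSERT ON v
      FOR EACH ROW EXECUTE PROCEDURE v_ins();
    -- The view was created in the same transaction, but it has no relation
    -- file to sync, so the WAL-skip optimization must not be applied here:
    COPY v FROM STDIN;
    1
    \.
    COMMIT;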
Reported-by: Luis M. Carril, Etsuro Fujita
Author: Amit Langote, Michael Paquier
Reviewed-by: Etsuro Fujita
Discussion: https://postgr.es/m/15552-c64aa14c5c22f63c@postgresql.org
Backpatch-through: 10, where support for COPY on views was added; v11
added support for COPY on foreign tables.
013ebc0a7b implements so-called GiST microvacuum: gistgettuple() marks
index tuples as dead when kill_prior_tuple is set, and later, when new tuple
insertion claims page space, those dead index tuples are physically deleted
from the page. When this deletion is replayed on a standby, it might conflict
with read-only queries, but 013ebc0a7b doesn't handle this. That may lead to
the disappearance of some tuples from read-only snapshots on the standby.
This commit implements resolution of conflicts between replay of GiST
microvacuum and standby queries. On the master we introduce a new WAL record
type, XLOG_GIST_DELETE, which carries the necessary information. On stable
releases we have to be tricky to keep WAL compatibility: the information
required for conflict processing is just appended to the data of the
XLOG_GIST_PAGE_UPDATE record, so a PostgreSQL version which doesn't know
about conflict processing will simply ignore it.
Reported-by: Andres Freund
Diagnosed-by: Andres Freund
Discussion: https://postgr.es/m/20181212224524.scafnlyjindmrbe6%40alap3.anarazel.de
Author: Alexander Korotkov
Backpatch-through: 9.6
For probably bogus reasons, we acquire only AccessShareLock on the
partition when we try to detach it from its parent partitioned table.
This can cause ugly things to happen if another transaction is doing
any sort of DDL to the partition concurrently.
Upgrade that lock to ShareUpdateExclusiveLock, which per discussion
seems to be the minimum needed.
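For instance (names made up):

    CREATE TABLE parted (a int) PARTITION BY RANGE (a);
    CREATE TABLE part1 PARTITION OF parted FOR VALUES FROM (0) TO (10);

    -- Detaching now takes ShareUpdateExclusiveLock on part1 rather than
    -- just AccessShareLock, keeping concurrent DDL on the partition from
    -- interfering:
    ALTER TABLE parted DETACH PARTITION part1;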
Reported by Robert Haas.
Discussion: https://postgr.es/m/CA+TgmoYruJQ+2qnFLtF1xQtr71pdwgfxy3Ziy-TxV28M6pEmyA@mail.gmail.com