This is a trivial update (consisting in fact only of the addition of
a comment). The point is just to get back to being synced with an
official release of tzcode, rather than some ad-hoc point in their
commit history, which is where commit 1f87181e1 left it.
The existing tests in preprocess_minmax_aggregates() usually prevent it
from trying to do anything with queries containing CTEs, but there's an
exception: a CTE could be present as a member of an appendrel, if we
flattened a UNION ALL that contains CTE references. If it did try to
generate an optimized path for a query using a CTE, it failed with
"could not find plan for CTE", as reported by Torsten Förtsch.
The proximate cause is an unwise decision in commit 3fc6e2d7f to clear
subroot->cte_plan_ids in build_minmax_path(). That left the subroot's
cte_plan_ids list out of step with its parse->cteList.
Removing the "subroot->cte_plan_ids = NIL;" assignment is enough to let
the case work again, but really it's pretty silly to be expending any
cycles at all in this module when there are CTEs: we always treat their
outputs as unordered so there's no way for the optimization to win.
Hence, also add an early-exit test so we don't waste time like that.
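A minimal sketch of the shape of that early-exit test (assuming it sits
near the top of preprocess_minmax_aggregates(), before any per-aggregate
work; not necessarily the exact patch):

    /* Sketch only: CTE outputs are treated as unordered, so the
     * min/max-to-index transformation can never win; bail out early. */
    if (parse->cteList != NIL)
        return;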
Back-patch to 9.6 where the misbehavior was introduced.
Report: https://postgr.es/m/CAKkG4_=gjY5QiHtqSZyWMwDuTd_CftKoTaCqxjJ7uUz1-Gw=qw@mail.gmail.com
Clean up some technical debt left behind by commit 72b1e3a21: instead of
quickly hacking the name of base_yylex() with a #define, set it properly
with "%option prefix". This causes the names of pgc.l's other exported
symbols to change as well, so run around and modify the outside references
to them as needed. Similarly, make pgc.l's external references to
base_yylval use that variable's true name instead of a macro.
The reason for doing this now is that the quick-hack solution will fail
with future versions of flex, as reported by Дилян Палаузов.
Hence, back-patch into 9.6 where the previous commit appeared, since
it's likely people will build 9.6 with newer flex versions during
its lifetime.
Discussion: https://postgr.es/m/d845c1af-e18d-6651-178f-9f08cdf37e10@aegee.org
When ts_rewrite()'s replacement argument is an empty tsquery, it's supposed
to simplify any operator nodes whose operand(s) become NULL; but it failed
to do that reliably, because dropvoidsubtree() only examined the top level
of the result tree. Rather than make a second recursive pass, let's just
give the responsibility to dofindsubquery() to simplify while it's doing
the main replacement pass. Per report from Andreas Seltenreich.
Artur Zakirov, with some cosmetic changes by me. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/8737i01dew.fsf@credativ.de
PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception
objects, evidently supposing that PyModule_AddObject would do that --- but
it doesn't. This left us in a situation where a Python garbage collection
cycle could result in deletion of exception object(s), causing server
crashes or wrong answers if the exception objects are used later in the
session.
In addition, PLy_generate_spi_exceptions didn't bother to test for
a null result from PyErr_NewException, which at best is inconsistent
with the code in PLy_add_exceptions. And PLy_add_exceptions, while it
did do Py_INCREF on the exceptions it makes, waited to do that till
after some PyModule_AddObject calls, creating a similar risk for
failure if garbage collection happened within those calls.
To fix, refactor to have just one piece of code that creates an
exception object and adds it to the spiexceptions module, bumping the
refcount first.
Also, let's add an additional refcount to represent the pointer we're
going to store in a C global variable or hash table. This should only
matter if the user does something weird like delete the spiexceptions
Python module, but lack of paranoia has caused us enough problems in
PL/Python already.
The fact that PyModule_AddObject doesn't do a Py_INCREF of its own
explains the need for the Py_INCREF added in commit 4c966d920, so we
can improve the comment about that; also, this means we really want
to do that before not after the PyModule_AddObject call.
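For illustration, here is a hedged sketch of the refcounting pattern
described above, using the CPython C API (the function and variable names
here are invented, not the actual PL/Python code):

    #include <Python.h>

    /*
     * Create one exception object and add it to a module, keeping an extra
     * reference for a C-side pointer.  PyErr_NewException returns a new
     * reference; PyModule_AddObject steals a reference on success but not
     * on failure.
     */
    static PyObject *
    add_exception(PyObject *mod, const char *dotted_name,
                  const char *attr_name, PyObject *base)
    {
        PyObject   *exc = PyErr_NewException(dotted_name, base, NULL);

        if (exc == NULL)
            return NULL;        /* let the caller report the failure */

        /* Extra reference for the pointer we will stash in a C global or
         * hash table; take it before PyModule_AddObject so a garbage
         * collection inside that call cannot free the object. */
        Py_INCREF(exc);

        if (PyModule_AddObject(mod, attr_name, exc) < 0)
        {
            Py_DECREF(exc);     /* the reference meant for the module */
            Py_DECREF(exc);     /* the reference meant for our table */
            return NULL;
        }
        return exc;             /* caller owns this reference */
    }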
The missing Py_INCREF in PLy_generate_spi_exceptions was reported and
diagnosed by Rafa de la Torre; the other fixes by me. Back-patch
to all supported branches.
Discussion: https://postgr.es/m/CA+Fz15kR1OXZv43mDrJb3XY+1MuQYWhx5kx3ea6BRKQp6ezGkg@mail.gmail.com
expandRTE() and get_rte_attribute_type() reported the exprType() and
exprTypmod() values of the expressions in the first row of the VALUES as
being the column type/typmod returned by the VALUES RTE. That's fine for
the data type, since we coerce all expressions in a column to have the same
common type. But we don't coerce them to have a common typmod, so it was
possible for rows after the first one to return values that violate the
claimed column typmod. This leads to the incorrect result seen in bug
#14448 from Hassan Mahmood, as well as some other corner-case misbehaviors.
The desired behavior is the same as we use in other type-unification
cases: report the common typmod if there is one, but otherwise return -1
indicating no particular constraint.
We fixed this in HEAD by deriving the typmods during transformValuesClause
and storing them in the RTE, but that's not a feasible solution in the back
branches. Instead, just use a brute-force approach of determining the
correct common typmod during expandRTE() and get_rte_attribute_type().
Simple testing says that that doesn't really cost much, at least not in
common cases where expandRTE() is only used once per query. It turns out
that get_rte_attribute_type() is typically never used at all on VALUES
RTEs, so the inefficiency there is of no great concern.
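As a rough sketch of the brute-force unification (the function name below
is invented; the real logic lives inside expandRTE() and
get_rte_attribute_type()):

    #include "postgres.h"
    #include "nodes/nodeFuncs.h"
    #include "nodes/pg_list.h"

    /*
     * Scan the given column of every VALUES row and report its typmod only
     * if all rows agree; otherwise return -1, meaning "no particular
     * constraint".
     */
    static int32
    values_column_typmod(List *values_lists, int colno)
    {
        int32       typmod = -1;
        bool        first = true;
        ListCell   *lc;

        foreach(lc, values_lists)
        {
            Node   *col = (Node *) list_nth((List *) lfirst(lc), colno);

            if (first)
            {
                typmod = exprTypmod(col);
                first = false;
            }
            else if (exprTypmod(col) != typmod)
                return -1;      /* rows disagree: no common typmod */
        }
        return typmod;
    }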
Report: https://postgr.es/m/20161205143037.4377.60754@wrigleys.postgresql.org
Discussion: https://postgr.es/m/27429.1480968538@sss.pgh.pa.us
array_position and its cousin array_positions were caching the element
type equality function's FmgrInfo without being careful enough to put it
in a long-lived context. This is obviously broken but it didn't matter
in most cases; only when using arrays of records (involving record_eq)
does it become a problem. The fix is to ensure that the type's equality
function's FmgrInfo is cached in the array_position's flinfo->fn_mcxt
rather than the current memory context.
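A sketch of the fix (my_extra stands in for array_position's per-call
cache struct hung off flinfo->fn_extra; surrounding declarations are
abbreviated):

    TypeCacheEntry *typentry;

    /* Look up the element type's equality function and cache its FmgrInfo
     * in fn_mcxt, which lives as long as the calling expression, rather
     * than in the current (possibly per-tuple) memory context. */
    typentry = lookup_type_cache(element_type, TYPECACHE_EQ_OPR_FINFO);
    fmgr_info_cxt(typentry->eq_opr_finfo.fn_oid,
                  &my_extra->proc,
                  fcinfo->flinfo->fn_mcxt);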
Apart from record types, the only other case that seems complex enough
to possibly cause the same problem is range types. I didn't find a way
to reproduce the problem with those, so I only include the test case
submitted with the bug report as a regression test.
Bug report and patch: Junseok Yang
Discussion: https://postgr.es/m/CAE+byMupUURYiZ6bKYgMZb9pgV1CYAijJGqWj-90W=nS7uEOeA@mail.gmail.com
Backpatch to 9.5, where array_position appeared.
Commit 4212cb73262bbdd164727beffa4c4744b4ead92d rendered a comment
in execMain.c incorrect. Per complaint from Tom Lane, repair.
Patch from Amit Kapila, per wording suggested by Tom Lane and me.
Previously, it was thought that this only needed to be done for the
benefit of possible standbys, so wal_level = minimal skipped it.
But that's not safe, because during crash recovery we might replay
an XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record, which recursively
removes the directory that contains the new init fork. So log it
always.
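Roughly, the "log it always" part amounts to something like this in the
init-fork creation path (a sketch, not the exact patch):

    /* Create the init fork, WAL-log it unconditionally (no wal_level
     * test), and sync it so it survives a crash even though the table's
     * data itself is never WAL-logged. */
    RelationOpenSmgr(rel);
    smgrcreate(rel->rd_smgr, INIT_FORKNUM, false);
    log_smgrcreate(&rel->rd_smgr->smgr_rnode.node, INIT_FORKNUM);
    smgrimmedsync(rel->rd_smgr, INIT_FORKNUM);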
The user-visible effect of this bug is that if you create a database
or tablespace, then create an unlogged table, then crash without
checkpointing, then restart, accessing the table will fail, because
it won't have been properly reset. This commit fixes that.
Michael Paquier, per a report from Konstantin Knizhnik. Wording of
the comments per a suggestion from me.
Ancient oversight in PageOutput(): if popen() fails, we'd better reset
the SIGPIPE handler before returning stdout, because ClosePager() won't.
Noticed while fixing the empty-PAGER issue.
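The oversight, in sketch form (disable_sigpipe_trap() and
restore_sigpipe_trap() are the existing helpers used around launching the
pager; this is not the full PageOutput() body):

    disable_sigpipe_trap();
    pagerpipe = popen(pagerprog, "w");
    if (pagerpipe)
        return pagerpipe;
    /* popen failed: put the SIGPIPE handler back ourselves, since
     * ClosePager() will never be called for plain stdout. */
    restore_sigpipe_trap();
    return stdout;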
If the PAGER environment variable is set but contains an empty string,
psql would pass it to "sh" which would silently exit, causing whatever
query output we were printing to vanish entirely. This is quite
mystifying; it took a long time for us to figure out that this was the
cause of Joseph Brenner's trouble report. Rather than allowing that
to happen, we should treat this as another way to specify "no pager".
(We could alternatively treat it as selecting the default pager, but
it seems more likely that the former is what the user meant to achieve
by setting PAGER this way.)
Nonempty, but all-white-space, PAGER values have the same behavior, and
it's pretty easy to test for that, so let's handle that case the same way.
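In sketch form, the test is about this simple (not the exact code; the
fallback macro name is the one psql already uses for an unset PAGER):

    pagerprog = getenv("PAGER");
    if (pagerprog == NULL)
        pagerprog = DEFAULT_PAGER;
    else
    {
        /* If PAGER is empty or all-white-space, treat it as "no pager". */
        const char *p = pagerprog;

        while (isspace((unsigned char) *p))
            p++;
        if (*p == '\0')
            return stdout;
    }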
Most other cases of faulty PAGER values will result in the shell printing
some kind of complaint to stderr, which should be enough to diagnose the
problem, so we don't need to work harder than this. (Note that there's
been an intentional decision not to be very chatty about apparent failure
returns from the pager process, since that may happen if, eg, the user
quits the pager with control-C or some such. I'd just as soon not start
splitting hairs about which exit codes might merit making our own report.)
libpq's old PQprint() function was already on board with ignoring empty
PAGER values, but for consistency, make it ignore all-white-space values
as well.
It's been like this a long time, so back-patch to all supported branches.
Discussion: https://postgr.es/m/CAFfgvXWLOE2novHzYjmQK8-J6TmHz42G8f3X0SORM44+stUGmw@mail.gmail.com
It typically is a "long", but it seems possible that on some platforms
it wouldn't be. In any case, this silences a compiler warning on
OpenBSD (cf buildfarm member curculio).
While at it, use snprintf not sprintf. This format string couldn't
possibly overrun the supplied buffer, but that doesn't seem like
a good reason not to use the safer style.
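The style in question, sketched (value and its declared type are
placeholders):

    char        buf[32];

    /* Cast to match the format string exactly, and use snprintf even
     * though this particular string cannot overrun the buffer. */
    snprintf(buf, sizeof(buf), "%ld", (long) value);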
Oversight in commit f828654e1. Back-patch to 9.6 where that came in.
Previously, a prepared statement created via a Parse message could get
a parallel plan, but one created with a PREPARE statement could not.
This state of affairs was due to confusion on my (rhaas) part: I
erroneously believed that a CREATE TABLE .. AS EXECUTE statement could
only be performed with a prepared statement created by PREPARE, but in fact
one created by a Prepare message works just as well. Therefore, it
makes no sense to allow parallel query in one case but not the other.
To fix, allow parallel query with all prepared statements, but run
the parallel plan serially (i.e. without workers) in the case of
CREATE TABLE .. AS EXECUTE. Also, document this.
Amit Kapila and Tobias Bussmann, plus an extra sentence of
documentation by me.
This reverts commit 4e01ecae98275298c680c92fdba62daf603dc98e.
Per Tom Lane, changing the definition of StringInfoData amounts to an
ABI break, which is unacceptable in back branches.
The previous code worked OK as long as a Gather node was never
rescanned, or if it was rescanned, as long as it got at least as
many workers on rescan as it had originally. But if the number
of workers ever decreased on a rescan, then it could crash.
Andreas Seltenreich
Previously gin_desc() displayed incorrect output "unknown action 0"
for XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE records with
valid actions. The cause of this problem was that gin_desc() wrongly
used XLogRecGetData() to extract data from those records.
Since they were registered by XLogRegisterBufData(), gin_desc() should
have used XLogRecGetBlockData(), instead, like gin_redo().
Also there were other differences about how to treat XLOG_GIN_INSERT
record between gin_desc() and gin_redo().
This commit fixes gin_desc() routine so that it treats those records
in the same way as gin_redo().
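In other words, the data extraction in gin_desc() now looks roughly like
this (a sketch mirroring gin_redo(), not the full routine):

    char       *payload;
    Size        len;

    /* The payload was registered with XLogRegisterBufData(), so it must
     * be fetched from the block data, not from the main record data. */
    payload = XLogRecGetBlockData(record, 0, &len);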
Back-patch to 9.5 where the WAL record format was revamped and
XLogRegisterBufData() was added.
Reported-By: Andres Freund
Reviewed-By: Tom Lane
Discussion: <20160509194645.7lewnpw647zegx2m@alap3.anarazel.de>
transformOnConflictClause incremented p_next_resno while generating the
phony targetlist for the EXCLUDED pseudo-rel. Then that field got
incremented some more during transformTargetList, possibly leading to
free_parsestate concluding that we'd overrun the allowed length of a tlist,
as reported by Justin Pryzby.
We could fix this by resetting p_next_resno to 1 after using it for the
EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other
places if we just don't use that field at all in this loop. (Note that
this doesn't change anything about the resnos that end up appearing in
the main target list, because those are all replaced with target-column
numbers by updateTargetListEntry.)
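Sketched, the loop now numbers the phony tlist entries by attribute number
instead of consuming p_next_resno (variable names abbreviated; not the
exact code):

    /* One entry per column of the target relation; resno is simply the
     * attribute number, so p_next_resno is left alone. */
    te = makeTargetEntry((Expr *) var,
                         attno,
                         pstrdup(NameStr(attr->attname)),
                         false);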
In passing, fix incorrect type OID assigned to the whole-row Var for
"EXCLUDED.*" (somehow this escaped having any bad consequences so far,
but it's certainly wrong); remove useless assignment to var->location;
pstrdup the column names in case of a relcache flush; and improve
nearby comments.
Back-patch to 9.5 where ON CONFLICT was introduced.
Report: https://postgr.es/m/20161204163237.GA8030@telsasoft.com
This has no effect in the most conventional case, where no relevant DLL
uses a debug build. For an example where it does matter, given a debug
build of MIT Kerberos, the krb_server_keyfile parameter usually had no
effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all
supported versions).
Christian Ullrich, reviewed by Michael Paquier.
In accordance with its own documentation, invoke CloseHandle() only when
directed in the documentation for the function that furnished the
handle. GetModuleHandle() does not so direct. We have been issuing
this call only in the rare event that a CRT DLL contains no "_putenv"
symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all
supported versions).
Christian Ullrich, reviewed by Michael Paquier.
Replace use of plain 0 as a null pointer constant. In comments, update
terminology and lessen redundancy. Back-patch to 9.2 (all supported
versions) for the convenience of back-patching the next two commits.
Christian Ullrich and Noah Misch, reviewed (in earlier versions) by
Michael Paquier.
Our documentation states that our maximum field size is 1 GB, and that
our maximum row size is 1.6 TB. However, while this might be attainable
in theory with enough contortions, it is not workable in practice; for
starters, pg_dump fails to dump tables containing rows larger than 1 GB,
even if individual columns are well below the limit; and even if one
does manage to manufacture a dump file containing a row that large, the
server refuses to load it anyway.
This commit enables dumping and reloading of such tuples, provided two
conditions are met:
1. no single column is larger than 1 GB (in output size -- for bytea
this includes the formatting overhead)
2. the whole row is not larger than 2 GB
There are three related changes to enable this:
a. StringInfo's API now has two additional functions that allow creating
a string that grows beyond the typical 1GB limit (a "long" string); see
the sketch after this list. ABI compatibility is maintained. We still
limit these strings to 2 GB, though, for reasons explained below.
b. COPY now uses long StringInfos, so that pg_dump doesn't choke
trying to emit rows longer than 1GB.
c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation
for the input tuple, which means that large tuples are accepted on
input. Note that at this point we do not apply any further limit to the
input tuple size.
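As a sketch of how change (a) is meant to be used (assuming the two new
entry points are named along the lines of initLongStringInfo() /
makeLongStringInfo(); data and datalen are placeholders):

    /* Opt in to >1GB growth explicitly; ordinary StringInfos keep the
     * old 1GB ceiling, preserving existing behavior. */
    StringInfoData buf;

    initLongStringInfo(&buf);
    appendBinaryStringInfo(&buf, data, datalen);    /* may now exceed 1 GB */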
The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit
length words to describe each row; and because the documentation is
ambiguous on its signedness and libpq does consider it signed, we cannot
use the highest-order bit. Additionally, the StringInfo API uses "int"
(which is 4 bytes wide on most platforms) in many places, so we'd need
to change that API too in order to improve things, which would have lots
of fallout.
Backpatch to 9.5, which is the oldest that has
MemoryContextAllocExtended, a necessary piece of infrastructure. We
could apply to 9.4 with very minimal additional effort, but any further
than that would require backpatching "huge" allocations too.
This is the largest set of changes we could find that can be
back-patched without breaking compatibility with existing systems.
Fixing a bigger set of problems (for example, dumping tuples bigger than
2GB, or dumping fields bigger than 1GB) would require changing the FE/BE
protocol and/or changing the StringInfo API in an ABI-incompatible way,
neither of which would be back-patchable.
Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
Per bug #14441 from Mark Pether, the documentation could be misread,
mainly because some of the examples failed to show what happens with
a multicharacter "characters to trim" string. Also, while the text
description in most of these entries was fairly clear that the
"characters" argument is a set of characters not a substring to match,
some of them used variant wording that was a bit less clear.
trim() itself suffered from both deficiencies and was thus pretty
misinterpretable.
Also fix failure to explain which of LEADING/TRAILING/BOTH is the
default.
Discussion: https://postgr.es/m/20161130011710.6539.53657@wrigleys.postgresql.org
consider_parallel_nestloop passed the wrong jointype down to its
subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it
thought that it could pass paths other than innerrel->cheapest_total_path
to create_unique_path, which create_unique_path is not on board with.
These bugs would lead to assertion failures or other errors, suggesting
that this code path hasn't been tested much.
hash_inner_and_outer's code for parallel join effectively treated both
JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for
different reasons :-(), leading to incorrect plans that treated a semijoin
as if it were a plain join.
Michael Day submitted a test case demonstrating that hash_inner_and_outer
failed for JOIN_UNIQUE_OUTER, and I found the other cases through code
review.
Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com
The documentation around the -b/--blobs option to pg_dump seemed to
imply that it might be possible to add blobs to a "schema-only" dump or
similar. Clarify that blobs are data and therefore will only be
included in dumps where data is being included, even when -b is used to
request blobs be included.
The -b option has been around since before 9.2, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/20161119173316.GA13284@tamriel.snowman.net
An example in the psql documentation showed a field name different
from what the command actually produces.
Pointed out by Fabien COELHO
Back-patch to 9.6 where the example was added.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1611291349400.19314@lancre
Commit 08dd23cec introduced an exception to the rule that extension member
objects can only be dropped as part of dropping the whole extension,
intending to allow such drops while running the extension's own creation or
update scripts. However, the exception was only applied at the outermost
recursion level, because it was modeled on a pre-existing check to ignore
dependencies on objects listed in pendingObjects. Bug #14434 from Philippe
Beaudoin shows that this is inadequate: in some cases we can reach an
extension member object by recursion from another one. (The bug concerns
the serial-sequence case; I'm not sure if there are other cases, but there
might well be.)
To fix, revert 08dd23cec's changes to findDependentObjects() and instead
apply the creating_extension exception regardless of stack level.
Having seen this example, I'm a bit suspicious that the pendingObjects
logic is also wrong and such cases should likewise be allowed at any
recursion level. However, changing that would interact in subtle ways
with the recursion logic (at least it would need to be moved to after the
recursing-from check). Given that the code's been like that a long time,
I'll refrain from touching it without a clear example showing it's wrong.
Back-patch to all active branches. In HEAD and 9.6, where suitable
test infrastructure exists, add a regression test case based on the
bug report.
Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
Discussion: <13224.1480177514@sss.pgh.pa.us>
Instead of confusingly stating platform-dependent defaults for these
parameters in the comments in postgresql.conf.sample (with the main
entry being a lie on Linux), teach initdb to install the correct
platform-dependent value in postgresql.conf, similarly to the way
we handle other platform-dependent defaults. This won't do anything
for existing 9.6 installations, but since it's effectively only a
documentation improvement, that seems OK.
Since this requires initdb to have access to the default values,
move the #define's for those to pg_config_manual.h; the original
placement in bufmgr.h is unworkable because that file can't be
included by frontend programs.
Adjust the default value for wal_writer_flush_after so that it is 1MB
regardless of XLOG_BLCKSZ, conforming to what is stated in both the
SGML docs and postgresql.conf. (We could alternatively make it scale
with XLOG_BLCKSZ, but I'm not sure I see the point.)
Copy-edit related SGML documentation.
Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.
Discussion: <30ebc6e3-8358-09cf-44a8-578252938424@2ndquadrant.com>
Andreas Seltenreich found another case where we were being too optimistic
about allowing a plan to be considered parallelizable despite it containing
initPlans. It seems like the real issue here is that if we know we are
going to tack initPlans onto the topmost Plan node for a subquery, we
had better mark that subquery's result Paths as not-parallel-safe. That
fixes this problem and allows reversion of a kluge (added in commit
7b67a0a49 and extended in f24cf960d) to not trust the parallel_safe flag
at top level.
Discussion: <874m2w4k5d.fsf@ex.ansel.ydns.eu>
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT,
we refuse the drop if there are any pending trigger events on the named
table; this ensures that we won't remove the pg_trigger row that will be
consulted by those events. But we should make the same check for the
referenced relation, else we might remove a due-to-be-referenced pg_trigger
row for that relation too, resulting in "could not find trigger NNN" or
"relation NNN has no triggers" errors at commit. Per bug #14431 from
Benjie Gillam. Back-patch to all supported branches.
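A sketch of the added check, using the 9.x-era heap_open() (con is the
pg_constraint entry being dropped; not the exact patch):

    if (con->contype == CONSTRAINT_FOREIGN)
    {
        Relation    frel = heap_open(con->confrelid, AccessExclusiveLock);

        /* Refuse the drop if the referenced table has pending trigger
         * events that would still need this constraint's triggers. */
        CheckTableNotInUse(frel, "ALTER TABLE");
        heap_close(frel, NoLock);
    }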
Report: <20161124114911.6530.31200@wrigleys.postgresql.org>
Previously, requesting commit timestamp for transactions
FrozenTransactionId and BootstrapTransactionId resulted in an error.
But since those values can validly appear in committed tuples' Xmin,
this behavior is unhelpful and error prone: each caller would have to
special-case those values before requesting timestamp data for an Xid.
We already have a perfectly good interface for returning "the Xid you
requested is too old for us to have commit TS data for it", so let's use
that instead.
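Sketched, the check in the commit-timestamp lookup becomes something like
this (ts and nodeid are the usual output parameters):

    /* Permanent XIDs (Bootstrap/Frozen) simply have no commit timestamp
     * data; report them as "too old" rather than raising an error. */
    if (!TransactionIdIsNormal(xid))
    {
        if (ts)
            *ts = 0;
        if (nodeid)
            *nodeid = InvalidRepOriginId;
        return false;
    }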
Backpatch to 9.5, where commit timestamps appeared.
Author: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com
When rebuilding an existing index, ALTER TABLE correctly kept the
physical file in the same tablespace, but it messed up the pg_class
entry if the index had been in the database's default tablespace
and "default_tablespace" was set to some non-default tablespace.
This led to an inaccessible index.
Fix by making pg_get_indexdef_string() always include a tablespace
clause, whether or not the index is in the default tablespace. The
previous behavior was installed in commit 537e92e41, and I think it just
wasn't thought through very clearly; certainly the possible effect of
default_tablespace wasn't considered. There's some risk in changing the
behavior of this function, but there are no other call sites in the core
code. Even if it's being used by some third party extension, it's fairly
hard to envision a usage that is okay with a tablespace clause being
appended some of the time but can't handle it being appended all the time.
Back-patch to all supported versions.
Code fix by me, investigation and test cases by Michael Paquier.
Discussion: <1479294998857-5930602.post@n3.nabble.com>
Before commit 906bfcad7, we were not actually processing the righthand
side of a multiple-column assignment in UPDATE as a row constructor:
it was just a parenthesized list of expressions. Call it that rather
than risking confusion by people who would expect the documented behaviors
of row constructors to apply.
Back-patch to 9.5; before that, the text correctly described the construct
as a "list of independent expressions".
Discussion: <16288.1479610770@sss.pgh.pa.us>
Create a section specifically for the syntactic rules around whole-row
variable usage, such as expansion of "foo.*". This was previously
documented only haphazardly, with some critical info buried in
unexpected places like xfunc-sql-composite-functions. Per repeated
questions in different mailing lists.
Discussion: <16288.1479610770@sss.pgh.pa.us>
There are assorted references to RETURNING in Part II, but nothing
that would qualify as an explanation of the feature, which seems
like an oversight considering how useful it is. Add something.
Noted while looking for a place to point a cross-reference to ...
A little COLLATE "C" goes a long way.
Pavel Stehule, per suggestion from Craig Ringer
Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
The code was intentionally not very careful about leaking strdup'd
strings in case of an error. That was forgivable probably, but it
also failed to notice strdup() failures, which could lead to subsequent
null-pointer-dereference crashes, since many callers unsurprisingly
didn't check for null pointers in the struct lconv fields. An even
worse problem is that it could throw error while we were setlocale'd
to a non-C locale, causing unwanted behavior in subsequent libc calls.
Rewrite to ensure that we cannot throw elog(ERROR) until after we've
restored the previous locale settings, or at least attempted to.
(I'm sorely tempted to make restore failure be a FATAL error, but
will refrain for the moment.) Having done that, it's not much more
work to ensure that we clean up strdup'd storage on the way out, too.
This code is substantially the same in all supported branches, so
back-patch all the way.
Michael Paquier and Tom Lane
Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>
We mustn't force parallel mode if the query has any subplans, since
ExecSerializePlan doesn't transmit them to workers. Testing
top_plan->initPlan is inadequate because (1) there might be initPlans
attached to lower plan nodes, and (2) non-initPlan subplans don't
work either. There's certainly room for improvement in those
restrictions, but for the moment that's what we've got.
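Sketched, the test in standard_planner() tightens to something like the
following (abbreviated; glob is the PlannerGlobal, and the exact condition
may differ):

    /* Only force a Gather on top if the plan is parallel-safe and carries
     * no subplans at all, since ExecSerializePlan won't ship subplans. */
    if (force_parallel_mode != FORCE_PARALLEL_OFF &&
        top_plan->parallel_safe &&
        glob->subplans == NIL)
    {
        /* ... wrap the plan in an artificial Gather node ... */
    }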
Amit Kapila, per report from Andreas Seltenreich
Discussion: <8737im6pmh.fsf@credativ.de>
Because we use transformTargetList() for UPDATE as well as SELECT
tlists, the code accidentally tried to expand a "*" reference into
several columns. This is nonsensical, because the UPDATE syntax
provides exactly one target column to put the value into. The
immediate result was that transformUpdateTargetList() got confused
and reported "UPDATE target count mismatch --- internal error".
It seems better to treat such a reference as a plain whole-row
variable, as it would be in other contexts. (This could produce
useful results when the target column is of composite type.)
Fix by tweaking transformTargetList() to perform *-expansion only
conditionally, depending on its exprKind parameter.
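Sketched (not the exact code; cref is the ColumnRef being examined), the
expansion is now gated on the list kind:

    bool    expand_star = (exprKind != EXPR_KIND_UPDATE_SOURCE);

    if (expand_star && IsA(llast(cref->fields), A_Star))
    {
        /* expand "foo.*" into one targetlist entry per column */
    }
    else
    {
        /* treat it as a plain whole-row variable */
    }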
Back-patch to 9.3. The problem exists further back, but a fix would be
much more invasive before that, because transformTargetList() wasn't
told what kind of list it was working on. Doesn't seem worth the
trouble given the lack of field reports. (I only noticed it because
I was checking the code while trying to improve the documentation about
how we handle "foo.*".)
Discussion: <4308.1479595330@sss.pgh.pa.us>
The serialization code dumped core for a string-valued GUC whose value
is NULL, which is a legal state. The infrastructure isn't capable of
transmitting that state exactly, but fortunately, transmitting an empty
string instead should be close enough (compare, eg, commit e45e990e4).
The code potentially underestimated the space required to format a
real-valued variable, both because it made an unwarranted assumption that
%g output would never be longer than %e output, and because it didn't count
right even for %e format. In practice this would pretty much always be
masked by overestimates for other variables, but it's still wrong.
Also fix a boundary-case error in read_gucstate, incorrect handling of the
case where guc_sourcefile is non-NULL but zero length (not clear that can
happen, but if it did, this code would get totally confused), and a
confusingly useless check for a NULL result from read_gucstate.
Andreas Seltenreich discovered the core dump; other issues noted while
reading nearby code. Back-patch to 9.5 where this code was introduced.
Michael Paquier and Tom Lane
Discussion: <871sy78wno.fsf@credativ.de>
Teach it not to complain if the dropStmt attached to an archive entry
is actually spelled CREATE OR REPLACE VIEW, since that will happen due to
an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it
print a WARNING and then emit the dropStmt unmodified. That seems like a
much saner behavior than Assert'ing or dumping core due to a null-pointer
dereference, which is what would happen before :-(.
Back-patch to 9.4 where this option was introduced.
Discussion: <19092.1479325184@sss.pgh.pa.us>
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
require complex interlocking that matched the requirements on the
master. This required an O(N) operation that became a significant
problem with large indexes, causing replication delays of seconds or in
some cases minutes while the XLOG_BTREE_VACUUM was replayed.
This commit skips the "pin scan" that was previously required, by
observing in detail when and how it is safe to do so, with full
documentation. The pin scan is skipped only in replay; the VACUUM code
path on master is not touched here.
No tests included. Manual tests using an additional patch to view WAL records
and their timing have shown the change in WAL records and their handling has
successfully reduced replication delay.
This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375
by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is
possible because this depends on catalog scans being MVCC. I (Álvaro)
additionally fixed a slight problem in the README, which explains why
this touches the 9.6 and master branches.
On Windows, libc will mask \r\n line endings for us, since we read the
password file in text mode. But that doesn't happen on Unix. People
who share password files across both systems might have \r\n line endings
in a file they use on Unix, so as a convenience, ignore trailing \r.
Per gripe from Josh Berkus.
In passing, put the existing check for empty line somewhere where it's
actually useful, ie after stripping the newline not before.
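Sketched, the per-line handling becomes (buf holds one line read from the
password file; not the exact code):

    len = strlen(buf);

    /* Strip a trailing newline, and also a trailing carriage return for
     * the benefit of password files shared with Windows. */
    if (len > 0 && buf[len - 1] == '\n')
        buf[--len] = '\0';
    if (len > 0 && buf[len - 1] == '\r')
        buf[--len] = '\0';

    /* Now (and only now) is the empty-line test useful. */
    if (len == 0)
        continue;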
Vik Fearing, adjusted a bit by me
Discussion: <0de37763-5843-b2cc-855e-5d0e5df25807@agliodbs.com>
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting
for whether MyPgXact->xmin could be cleared or advanced. In normal
transactions this was masked by the fact that the transaction snapshot
would be older, but during backend startup and certain utility commands
it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had
been cleared, meaning that recently-deleted rows could be pruned even
though this snapshot could still see them, causing unexpected catalog
lookup failures. This effect appears to be the explanation for a recent
failure on buildfarm member piculet.
To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever
it is valid.
In the previous logic, it was possible for the CatalogSnapshot to remain
valid across waits for client input, but with this change that would mean
it delays advance of global xmin in cases where it did not before. To
avoid possibly causing new table-bloat problems with clients that sit idle
for long intervals, add code to invalidate the CatalogSnapshot before
waiting for client input. (When the backend is busy, it's unlikely that
the CatalogSnapshot would be the oldest snap for very long, so we don't
worry about forcing early invalidation of it otherwise.)
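In sketch form, the two pieces are: register the snapshot when it is
taken, and throw it away before the backend goes idle (snapmgr.c details
and exact call sites elided; only InvalidateCatalogSnapshot() is named
with confidence here):

    /* When the CatalogSnapshot is (re)computed, keep it in the registered
     * snapshots heap so SnapshotResetXmin() accounts for it. */
    pairingheap_add(&RegisteredSnapshots, &CatalogSnapshot->ph_node);

    /* Just before waiting for client input, drop it so an idle session
     * does not hold back the global xmin. */
    InvalidateCatalogSnapshot();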
In passing, remove the CatalogSnapshotStale flag in favor of using
"CatalogSnapshot != NULL" to represent validity, as we do for the other
special snapshots in snapmgr.c. And improve some obsolete comments.
No regression test because I don't know a deterministic way to cause this
failure. But the stress test shown in the original discussion provokes
"cache lookup failed for relation 1255" within a few dozen seconds for me.
Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's
quite easy to produce similar failures with the same test case in branches
before 9.4. But MVCC catalog scans were supposed to fix that.)
Discussion: <16447.1478818294@sss.pgh.pa.us>