Having the vacuumdb script prompt for passwords over and over was a
preexisting problem when it processed multiple databases or multiple
analyze stages, but the parallel mode introduced in commit
a179232047 made it worse.
Fix the annoyance by keeping a copy of the password used by the first
connection that requires one. Since users can (currently) only have a
single password, there's no need for more complex arrangements (such as
remembering one password per database).
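As a rough sketch of the approach (illustrative only; connect_database and
prompt_for_password are placeholder names, not the actual vacuumdb code),
the first connection that turns out to need a password prompts once, and
later connections reuse the saved value:

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

static char saved_password[128];
static int  have_password = 0;

/* Illustrative prompt; the real code uses the frontend's password prompt. */
static const char *
prompt_for_password(void)
{
    printf("Password: ");
    fflush(stdout);
    if (fgets(saved_password, sizeof(saved_password), stdin) == NULL)
        saved_password[0] = '\0';
    saved_password[strcspn(saved_password, "\n")] = '\0';
    have_password = 1;
    return saved_password;
}

static PGconn *
connect_database(const char *dbname)
{
    const char *keywords[] = {"dbname", "password", NULL};
    const char *values[3];
    PGconn     *conn;

    values[0] = dbname;
    values[1] = have_password ? saved_password : NULL;
    values[2] = NULL;
    conn = PQconnectdbParams(keywords, values, 0);

    /* Prompt only for the first connection that requires a password. */
    if (PQstatus(conn) == CONNECTION_BAD &&
        PQconnectionNeedsPassword(conn) && !have_password)
    {
        PQfinish(conn);
        values[1] = prompt_for_password();
        conn = PQconnectdbParams(keywords, values, 0);
    }
    return conn;
}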
Per bug #13741 reported by Eric Brown. Patch authored and
cross-reviewed by Haribabu Kommi and Michael Paquier, slightly tweaked
by Álvaro Herrera.
Discussion: http://www.postgresql.org/message-id/20151027193919.931.54948@wrigleys.postgresql.org
Backpatch to 9.5, where parallel vacuumdb was introduced.
In commit 210eb9b743c0645d I centralized libpq's logic for closing down
the backend communication socket, and made the new pqDropConnection
routine always reset the I/O buffers to empty. Many of the call sites
previously had not had such code, and while that amounted to an oversight
in some cases, there was one place where it was intentional and necessary
*not* to flush the input buffer: pqReadData should never cause that to
happen, since we probably still want to process whatever data we read.
This is the true cause of the problem Robert was attempting to fix in
c3e7c24a1d60dc6a, namely that libpq no longer reported the backend's final
ERROR message before reporting "server closed the connection unexpectedly".
But that only accidentally fixed it, by invoking parseInput before the
input buffer got flushed; and very likely there are timing scenarios
where we'd still lose the message before processing it.
To fix, pass a flag to pqDropConnection to tell it whether to flush the
input buffer or not. On review I think flushing is actually correct for
every other call site.
Back-patch to 9.3 where the problem was introduced. In HEAD, also improve
the comments added by c3e7c24a1d60dc6a.
Also fill in the previously empty "major enhancements" list. YMMV as to
which items should make the cut, but it's past time we had something more
than a placeholder here.
(I meant to get this done before beta2 was wrapped, but got distracted by
PDF build problems. Better late than never.)
These were discussed in three different sections of the manual, which
unsurprisingly had diverged over time; and the descriptions of individual
variables lacked stylistic consistency even within each section (and
frequently weren't in very good English anyway). Clean up the mess, and
remove some of the redundant information in hopes that future additions
will be less likely to re-introduce inconsistency. For instance I see
no need for maintenance.sgml to include its very own list of all the
autovacuum storage parameters, especially since that list was already
incomplete.
Commit 8457d0beca731bf0 introduced an example which, while not incorrect,
failed to exhibit the behavior it meant to describe, as a result of omitting
an E'' prefix that needed to be there. Noticed and fixed by Peter Geoghegan.
I (tgl) failed to resist the temptation to wordsmith nearby text a bit
while at it.
In commit a5ec86a7c787832d28d5e50400ec96a5190f2555 I wrote a quick hack
that reduced the number of TeX string pool entries created while converting
our documentation to PDF form. That held the fort for a while, but as of
HEAD we're back up against the same limitation. It turns out that the
original coding of \FlowObjectSetup actually results in *three* string pool
entries being generated for every "flow object" (that is, potential
cross-reference target) in the documentation, and my previous hack only got
rid of one of them. With a little more care, we can reduce the string
count to one per flow object plus one per actually-cross-referenced flow
object (about 115000 + 5000 as of current HEAD); that should work until
the documentation volume roughly doubles from where it is today.
As a not-incidental side benefit, this change also causes pdfjadetex to
stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file.
It had been making one willy-nilly for every flow object; now it's just one
per actually-cross-referenced object. This results in close to a 2X
savings in PDF file size. We will still want to run the output through
"jpdftweak" to get it to be compressed; but we no longer need removal of
unreferenced bookmarks, so we might be able to find a quicker tool for
that step.
Although the failure only affects HEAD and US-format output at the moment,
9.5 cannot be more than a few pages short of failing likewise, so it
will inevitably fail after a few rounds of minor-version release notes.
I don't have a lot of faith that we'll never hit the limit in the older
branches; and anyway it would be nice to get rid of jpdftweak across the
board. Therefore, back-patch to all supported branches.
By accident the replication origin was not set properly in
DecodeCommit(). That's bad because the origin is passed to the output
plugin's origin filter, and is accessible from the output plugin via
ReorderBufferTXN->origin_id. Accessing the origin of individual changes
worked before the fix, which is why this wasn't noticed earlier.
Reported-By: Craig Ringer
Author: Craig Ringer
Discussion: CAMsr+YFhBJLp=qfSz3-J+0P1zLkE8zNXM2otycn20QRMx380gw@mail.gmail.com
Backpatch: 9.5, where replication origins were introduced
At least OpenBSD, NetBSD, and Windows don't support it. This repairs
pg_ctl for listen_addresses='0.0.0.0' and listen_addresses='::'. Since
pg_ctl prefers to test a Unix-domain socket, Windows users are most
likely to need this change. Back-patch to 9.1 (all supported versions).
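A minimal sketch of the substitution involved (the helper name is
illustrative, not pg_ctl's actual code): when the server listens on a
wildcard address, the liveness probe targets the loopback address instead,
since the platforms named above refuse a connection aimed at the wildcard
address itself:

#include <string.h>

/* Illustrative helper: map a wildcard listen address to a connectable one. */
static const char *
probe_host_for(const char *listen_addr)
{
    if (strcmp(listen_addr, "0.0.0.0") == 0)
        return "127.0.0.1";
    if (strcmp(listen_addr, "::") == 0)
        return "::1";
    return listen_addr;
}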
This could change pg_ctl interaction with loopback-interface firewall
rules. Therefore, in 9.4 and earlier (released branches), activate the
change only on known-affected platforms.
Reported (bug #13611) and designed by Kondo Yuta.
Per discussion, the original name was a bit misleading, and
PQsslAttributeNames() seems more apropos. It's not quite too late to
change this in 9.5, so let's change it while we can.
Also, make sure that the pointer array is const, not only the pointed-to
strings.
Minor documentation wordsmithing while at it.
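For reference, a small usage sketch under the renamed, fully-const
interface (print_ssl_attributes is just an illustrative caller):

#include <stdio.h>
#include <libpq-fe.h>

/* Illustrative caller: list the available SSL attributes of a connection. */
static void
print_ssl_attributes(PGconn *conn)
{
    const char *const *names = PQsslAttributeNames(conn);
    int         i;

    for (i = 0; names[i] != NULL; i++)
    {
        const char *value = PQsslAttribute(conn, names[i]);

        printf("%s = %s\n", names[i], value ? value : "(not available)");
    }
}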
Lars Kanis, slight adjustments by me
Lookahead and lookbehind constraints aren't allowed to contain backrefs,
and parentheses within them are always considered non-capturing. Or so
says the manual. But the regexp parser forgot about these rules once
inside a parenthesized subexpression, so that constructs like (\w)(?=(\1))
were accepted (but then not correctly executed --- a case like this acted
like (\w)(?=\w), without any enforcement that the two \w's match the same
text). And in (?=((foo))) the innermost parentheses would be counted as
capturing parentheses, though no text would ever be captured for them.
To fix, properly pass down the "type" argument to the recursive invocation
of parse().
Back-patch to all supported branches; it was agreed that silent
misexecution of such patterns is worse than throwing an error, even though
new errors in minor releases are generally not desirable.
With include_realm=1 being set down in parse_hba_auth_opt, if multiple
options are passed on the pg_hba line, such as:
host all all 0.0.0.0/0 gss include_realm=0 krb_realm=XYZ.COM
We would mistakenly reset include_realm back to 1. Instead, we need to
set include_realm=1 up in parse_hba_line, prior to parsing any of the
additional options.
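The shape of the fix, as a simplified sketch (the structure and function
names here are illustrative, not the literal hba.c code):

#include <stdlib.h>
#include <string.h>

typedef struct ToyHbaLine
{
    int include_realm;
    /* ... other fields ... */
} ToyHbaLine;

static void
parse_auth_option(ToyHbaLine *line, const char *name, const char *value)
{
    /*
     * Setting the include_realm default here, once per option, was the
     * bug: parsing a later option such as krb_realm clobbered an explicit
     * include_realm=0 given earlier on the same line.
     */
    if (strcmp(name, "include_realm") == 0)
        line->include_realm = atoi(value);
    /* ... handle krb_realm and other options ... */
}

static void
parse_line(ToyHbaLine *line, const char *names[], const char *values[], int n)
{
    int i;

    line->include_realm = 1;    /* default established once, up front */
    for (i = 0; i < n; i++)
        parse_auth_option(line, names[i], values[i]);
}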
Discovered by Jeff McCormick during testing.
Bug introduced by 9a08841.
Back-patch to 9.5.
The jsonb_path_ops code calculated hash values inconsistently in some cases
involving nested arrays and objects. This would result in queries possibly
not finding entries that they should find, when using a jsonb_path_ops GIN
index for the search. The problem cases involve JSONB values that contain
both scalars and sub-objects at the same nesting level, for example an
array containing both scalars and sub-arrays. To fix, reset the current
stack->hash after processing each value or sub-object, not before; and
don't try to be cute about the outermost level's initial hash.
Correcting this means that existing jsonb_path_ops indexes may now be
inconsistent with the new hash calculation code. The symptom is the same
--- searches not finding entries they should find --- but the specific
rows affected are likely to be different. Users will need to REINDEX
jsonb_path_ops indexes to make sure that all searches work as expected.
Per bug #13756 from Daniel Cheng. Back-patch to 9.4 where the faulty
logic was introduced.
Up until now, the total amount of data that could be passed to a
background worker at startup was one datum, which can be as small as
4 bytes on some systems. That's enough to pass a dsm_handle or an
array index, but not much else. Add a bgw_extra field to the
BackgroundWorker struct, allowing up to 128 bytes to be passed to
a new worker on any platform.
Use this to fix a problem I recently discovered with the parallel
context machinery added in 9.5: the master assigns each worker an
array index, and each worker subsequently assigns itself an array
index, and there's nothing to guarantee that the two sets of indexes
match, leading to chaos.
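For illustration, a sketch of how a registrant might use the new field to
hand the worker its assigned index (the extension and function names are
placeholders):

#include "postgres.h"
#include "postmaster/bgworker.h"

static void
launch_worker_with_index(int assigned_index)
{
    BackgroundWorker worker;
    BackgroundWorkerHandle *handle;

    memset(&worker, 0, sizeof(worker));
    snprintf(worker.bgw_name, BGW_MAXLEN, "example worker %d", assigned_index);
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_ConsistentState;
    worker.bgw_restart_time = BGW_NEVER_RESTART;
    snprintf(worker.bgw_library_name, BGW_MAXLEN, "example_extension");
    snprintf(worker.bgw_function_name, BGW_MAXLEN, "example_worker_main");
    worker.bgw_main_arg = (Datum) 0;

    /* up to BGW_EXTRALEN (128) bytes can be passed this way */
    memcpy(worker.bgw_extra, &assigned_index, sizeof(assigned_index));

    RegisterDynamicBackgroundWorker(&worker, &handle);
}

/*
 * In example_worker_main, the worker reads the index back:
 *
 *     int my_index;
 *     memcpy(&my_index, MyBgworkerEntry->bgw_extra, sizeof(my_index));
 */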
Normally, I would not back-patch the change to add bgw_extra, since it
is basically a feature addition. However, since 9.5 is still in beta
and there seems to be no other sensible way to repair the broken
parallel context machinery, back-patch to 9.5. Existing background
worker code can ignore the bgw_extra field without a problem, but
might need to be recompiled since the structure size has changed.
Report and patch by me. Review by Amit Kapila.
Standard-conforming literals have been the default for long enough that
it no longer seems necessary to go out of our way to tell people to write
regex escapes illegibly.
Fix some brain fade in commit a2dabf0e1dda93c8: erroneous variable names
in docs, rearrangements that made sentences less clear not more so,
undocumented and poorly-chosen-anyway API behaviors of subroutines,
bad grammar in error messages, copy-and-paste faults.
Albe Laurenz and Tom Lane
Commit a1480ec1d3bacb9acb08ec09f22bc25bc033115b purported to fix the
problems with commit b2ccb5f4e6c81305386edb34daf7d1d1e1ee112a, but it
didn't completely fix them. The problem is that the checks were
performed in the wrong order, leading to a race condition. If the
sender attached, sent a message, and detached after the receiver
called shm_mq_get_sender and before the receiver called
shm_mq_counterparty_gone, we'd incorrectly return SHM_MQ_DETACHED
before all messages were read. Repair by reversing the order of
operations, and add a long comment explaining why this new logic is
(hopefully) correct.
On insert the CheckForSerializableConflictIn() test was performed
before the page(s) which were going to be modified had been locked
(with an exclusive buffer content lock). If another process
acquired a relation SIReadLock on the heap and scanned to a page on
which an insert was going to occur before the page was so locked,
a rw-conflict would be missed, which could allow a serialization
anomaly to go undetected. The window between the check and the page
lock was small, so the bug was generally not noticed unless there
was high concurrency with multiple processes inserting into the
same table.
This was reported by Peter Bailis as bug #11732, by Sean Chittenden
as bug #13667, and by others.
The race condition was eliminated in heap_insert() by moving the
check down below the acquisition of the buffer lock, which had been
the very next statement. Because of the loop locking and unlocking
multiple buffers in heap_multi_insert() a check was added after all
inserts were completed. The check before the start of the inserts
was left because it might avoid a large amount of work to detect a
serialization anomaly before performing all of the inserts and
the related WAL logging.
While investigating this bug, other SSI bugs which were even harder
to hit in practice were noticed and fixed, an unnecessary check
(covered by another check, so redundant) was removed from
heap_update(), and comments were improved.
Back-patch to all supported branches.
Kevin Grittner and Thomas Munro
Show how this can be used in practice to make queries simpler and more
flexible. Also, draw an explicit contrast to the existence operator,
which doesn't work that way.
Peter Geoghegan and Tom Lane
A bug in the original free space computation made it possible to
return a page which wasn't actually able to fit the item. Since the
insertion code isn't prepared to deal with PageAddItem failing, a PANIC
resulted ("failed to add BRIN tuple [to new page]"). Add a macro to
encapsulate the correct computation, and use it in
brin_getinsertbuffer's callers before calling that routine, to raise an
early error.
I became aware of the possibility of a problem in this area while working
on ccc4c074994d734. There's no archived discussion about it, but it's
easy to reproduce a problem in the unpatched code with something like
CREATE TABLE t (a text);
CREATE INDEX ti ON t USING brin (a) WITH (pages_per_range=1);
for length in `seq 8000 8196`
do
psql -f - <<EOF
TRUNCATE TABLE t;
INSERT INTO t VALUES ('z'), (repeat('a', $length));
EOF
done
Backpatch to 9.5, where BRIN was introduced.
Further tweak commit_ts.c so that on a standby the state is completely
consistent with that in the master, rather than behaving
differently in cases where the settings differ. Now in standby and
master the module should always be active or inactive in lockstep.
Author: Petr Jelínek, with some further tweaks by Álvaro Herrera.
Backpatch to 9.5, where commit timestamps were introduced.
Discussion: http://www.postgresql.org/message-id/5622BF9D.2010409@2ndquadrant.com
Bernd Helmle complained that CreateReplicationSlot() was assigning the
same value to the same variable twice, so we could remove one of them.
Code inspection reveals that we can actually remove both assignments:
according to the author, the assignment was there only for the beauty
of the strlen line; another possible fix is to put the strlen on its
own line, so do that.
To be consistent within the file, refactor all duplicated strlen()
calls, which is what we do elsewhere in the backend anyway. In
basebackup.c, snprintf already returns the right length; no need for
strlen afterwards.
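The snprintf point is simply the following (an illustrative fragment, not
the basebackup.c code):

#include <stdio.h>

/* snprintf reports how many bytes it wrote (assuming no truncation), so a
 * follow-up strlen() on the same buffer just recomputes that value. */
static int
format_name(char *buf, size_t buflen, const char *name)
{
    int len = snprintf(buf, buflen, "%s", name);

    return len;     /* use this instead of strlen(buf) */
}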
Backpatch to 9.4, where replication slots were introduced, to keep code
identical. Some of this is older, but the patch doesn't apply cleanly
and it's only of cosmetic value anyway.
Discussion: http://www.postgresql.org/message-id/BE2FD71DEA35A2287EA5F018@eje.credativ.lan
If the counterparty writes some data into the queue and then detaches,
it's wrong to return SHM_MQ_DETACHED right away. If we do that, we
fail to read whatever was written.
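The rule being fixed can be sketched with a toy queue (illustrative only,
not the shm_mq implementation itself):

#include <string.h>
#include <stddef.h>

typedef enum { MQ_SUCCESS, MQ_WOULD_BLOCK, MQ_DETACHED } toy_mq_result;

typedef struct toy_queue
{
    char    buf[1024];
    size_t  nused;              /* bytes written but not yet read */
    int     sender_detached;
} toy_queue;

static toy_mq_result
toy_receive(toy_queue *q, char *out, size_t outlen, size_t *nread)
{
    if (q->nused > 0)
    {
        /* Drain data the sender already wrote, even if it has detached. */
        *nread = (outlen < q->nused) ? outlen : q->nused;
        memcpy(out, q->buf, *nread);
        memmove(q->buf, q->buf + *nread, q->nused - *nread);
        q->nused -= *nread;
        return MQ_SUCCESS;
    }
    if (q->sender_detached)
        return MQ_DETACHED;     /* only now is "detached" the right answer */
    return MQ_WOULD_BLOCK;      /* a nowait caller should retry later */
}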
The shm_mq mechanism was intended to optionally notice when the process
on the other end of the queue fails to attach to the queue. It does
this by allowing the user to pass a BackgroundWorkerHandle; if the
background worker in question is launched and dies without attaching
to the queue, then we know it never will. This logic works OK in
blocking mode, but when called with nowait = true we fail to notice
that this has happened due to an asymmetry in the logic. Repair.
Reported off-list by Rushabh Lathia. Patch by me.
Commit bda76c1c8cfb1d11751ba6be88f0242850481733 caused both plus and
minus infinity to be rendered as "infinity", which is not only wrong
but inconsistent with the pre-9.4 behavior of to_json(). Fix that by
duplicating the coding in date_out/timestamp_out/timestamptz_out more
closely. Per bug #13687 from Stepan Perlov. Back-patch to 9.4, like
the previous commit.
In passing, also re-pgindent json.c, since it had gotten a bit messed up by
recent patches (and I was already annoyed by indentation-related problems
in back-patching this fix ...)
Per a report from Shay Rojansky, Npgsql sends ssl_renegotiation_limit=0
in the startup packet because it does not support renegotiation; other
clients which have not attempted to support renegotiation might well
behave similarly. The recent removal of this parameter forces them to
break compatibility with either current PostgreSQL versions, or
previous ones. Per discussion, the best solution is to accept the
parameter but only allow a value of 0.
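One way to express "accept the parameter but only allow a value of 0" is a
check hook along these lines (a sketch; whether the actual fix is wired up
exactly this way is not shown here):

#include "postgres.h"
#include "utils/guc.h"

/*
 * Illustrative check hook: the parameter remains recognized for
 * compatibility, but any nonzero value is rejected.
 */
static bool
check_ssl_renegotiation_limit(int *newval, void **extra, GucSource source)
{
    return *newval == 0;
}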
Shay Rojansky, edited a little by me.
Instead, use transaction abort. Given an unlucky bout of latency, the
timeout would cancel the RESET itself. Buildfarm members gharial,
lapwing, mereswine, shearwater, and sungazer witness that. Back-patch
to 9.1 (all supported versions). The query_canceled test still could
timeout before entering its subtransaction; for whatever reason, that
has yet to happen on the buildfarm.
pg_regprefix was doing nothing with lookahead constraints, which would
be fine if it were the right kind of nothing, but it isn't: we have to
terminate our search for a fixed prefix, not just pretend the LACON arc
isn't there. Otherwise, if the current state has both a LACON outarc and a
single plain-color outarc, we'd falsely conclude that the color represents
an addition to the fixed prefix, and generate an extracted index condition
that restricts the indexscan too much. (See added regression test case.)
Terminating the search is conservative: we could traverse the LACON arc
(thus assuming that the constraint can be satisfied at runtime) and then
examine the outarcs of the linked-to state. But that would be a lot more
work than it seems worth, because writing a LACON followed by a single
plain character is a pretty silly thing to do.
This makes a difference only in rather contrived cases, but it's a bug,
so back-patch to all supported branches.
Revert our previous addition of "all" flags to copyins() and copyouts();
they're no longer needed, and were never anything but an unsightly hack.
Improve a couple of infelicities in the REG_DEBUG code for dumping
the NFA data structure, including adding code to count the total
number of states and arcs.
Add a couple of missed error checks.
Add some more documentation in the README file, and some regression tests
illustrating cases that exceeded the state-count limit and/or took
unreasonable amounts of time before this set of patches.
Back-patch to all supported branches.
This code previously counted the number of NFA states it created, and
complained if a limit was exceeded, so as to prevent bizarre regex patterns
from consuming unreasonable time or memory. That's fine as far as it went,
but the code paid no attention to how many arcs linked those states. Since
regexes can be contrived that have O(N) states but will need O(N^2) arcs
after fixempties() processing, it was still possible to blow out memory,
and take a long time doing it too. To fix, modify the bookkeeping to count
space used by both states and arcs.
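In outline, the bookkeeping change is to charge for arcs as well as states,
along these lines (the constant and names are placeholders, not the
library's actual ones):

#include <stddef.h>

/* Placeholder limit, standing in for the roughly-150MB budget mentioned
 * below for 64-bit machines. */
#define TOY_MAX_COMPILE_SPACE   ((size_t) 150 * 1024 * 1024)

/* Return 1 while the compiled NFA is within budget, 0 to fail with
 * REG_ETOOBIG ("regular expression is too complex"). */
static int
compile_space_ok(size_t nstates, size_t state_size,
                 size_t narcs, size_t arc_size)
{
    size_t used = nstates * state_size + narcs * arc_size;

    return used <= TOY_MAX_COMPILE_SPACE;
}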
I did not bother with including the "color map" in the accounting; it
can only grow to a few megabytes, which is not a lot in comparison to
what we're allowing for states+arcs (about 150MB on 64-bit machines
or half that on 32-bit machines).
Looking at some of the larger real-world regexes captured in the Tcl
regression test suite suggests that the most that is likely to be needed
for regexes found in the wild is under 10MB, so I believe that the current
limit has enough headroom to make it okay to keep it as a hard-wired limit.
In connection with this, redefine REG_ETOOBIG as meaning "regular
expression is too complex"; the previous wording of "nfa has too many
states" was already somewhat inapropos because of the error code's use
for stack depth overrun, and it was not very user-friendly either.
Back-patch to all supported branches.
The previous coding would create a new intermediate state every time it
wanted to interchange the ordering of two constraint arcs. Certain regex
features such as \Y can generate large numbers of parallel constraint arcs,
and if we needed to reorder the results of that, we created unreasonable
numbers of intermediate states. To improve matters, keep a list of
already-created intermediate states associated with the state currently
being considered by the outer loop; we can re-use such states to place all
the new arcs leading to the same destination or source.
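A simplified sketch of the reuse strategy (the data structures here are
illustrative, not the regex engine's own): within the current outer-loop
iteration, remember one intermediate state per destination, so that all
re-routed arcs bound for that destination share it instead of each
spawning a new state.

#include <stdlib.h>

typedef struct ToyState
{
    int id;
} ToyState;

typedef struct ToyInterm
{
    ToyState           *dest;   /* destination that the intermediate feeds */
    ToyState           *mid;    /* the reusable intermediate state */
    struct ToyInterm   *next;
} ToyInterm;

static ToyState *
new_state(int *counter)
{
    ToyState *s = malloc(sizeof(ToyState));

    s->id = (*counter)++;
    return s;
}

/* Find or create the intermediate state used for arcs leading to "dest". */
static ToyState *
intermediate_for(ToyInterm **cache, ToyState *dest, int *counter)
{
    ToyInterm *p;

    for (p = *cache; p != NULL; p = p->next)
        if (p->dest == dest)
            return p->mid;              /* reuse: no new state created */

    p = malloc(sizeof(ToyInterm));
    p->dest = dest;
    p->mid = new_state(counter);
    p->next = *cache;
    *cache = p;
    return p->mid;
}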
I also took the trouble to redefine push() and pull() to have a less risky
API: they no longer delete any state or arc that the caller might possibly
have a pointer to, except for the specifically-passed constraint arc.
This reduces the risk of re-introducing the same type of error seen in
the failed patch for CVE-2007-4772.
Back-patch to all supported branches.