FlightAware is still maintaining this, and indeed seems to be more
active with it than the pgtclng fork is. List both, for the
time being anyway.
In the back branches, also back-port commit e20f679f6 and other
recent updates to the client-interfaces list. I think these are
probably of current interest to users of back branches. I did
not touch the list of externally maintained PLs in the back
branches, though. Those are much more likely to be sensitive to the
server version, and I don't know which of these PLs work all the way back.
Discussion: https://postgr.es/m/20170730162612.1449.58796@wrigleys.postgresql.org
Back-patch commit 76a1c97bf21c301f61bb41345e0cdd0d365b2086 into
the older branches (9.5 and before). This is needed because perl
5.26 and later treats the case as an error, not just a warning.
Original patch by Andrew Dunstan; need for back-patch noted by
Ashutosh Sharma.
Discussion: https://postgr.es/m/CAE9k0PkfNcmj9pA7Yx4qQ=K=3aY4TuiRhp7KYpayDWm9MYsnjg@mail.gmail.com
XLByteToSeg and XLByteToPrevSeg calculate only a segment number. The
definitions of these macros were modified by commit
dfda6ebaec6763090fb78b458a979b558c50b39b, but the comment remained
unchanged.
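For reference, a sketch of the post-commit definitions in
src/include/access/xlog_internal.h (from memory; treat as illustrative,
not the verbatim header):

    /* compute a segment number, not a byte offset, from an XLogRecPtr */
    #define XLByteToSeg(xlrp, logSegNo) \
        logSegNo = (xlrp) / XLogSegSize

    /* same, except a pointer sitting exactly on a segment boundary is
     * treated as belonging to the previous segment */
    #define XLByteToPrevSeg(xlrp, logSegNo) \
        logSegNo = ((xlrp) - 1) / XLogSegSize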
Patch by Yugo Nagata. Back-patched to 9.3 and beyond.
Previously the docs just said "5.8 or later". Experimentation shows
that while you can build on Unix from a git checkout with 5.8.0,
compiling recent PL/Perl requires at least 5.8.1, and you won't be
able to run the TAP tests with less than 5.8.3 because that's when
they added "prove". (I do not have any information on just what the
MSVC build scripts require.)
Since all these versions are quite ancient, let's not split hairs
in the docs, but just say that 5.8.3 is the minimum requirement.
Discussion: https://postgr.es/m/16894.1501392088@sss.pgh.pa.us
Fix assorted places that had not bothered with the convention of
prefixing catalog and function names with "pg_catalog.". That
could result in query failure when running with a nondefault
search_path. Also fix two places that weren't quoting OID literals.
I think the latter hasn't mattered much since about 7.3, but it's still
a bad idea to be doing it in 99 places and not in 2 others.
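To illustrate the convention, a hedged sketch in the style of psql's
describe.c (buf and oid are illustrative locals, not the exact code):

    /* Schema-qualify catalog names and quote OID literals so the query
     * still works under a nondefault search_path. */
    printfPQExpBuffer(&buf,
                      "SELECT c.relname FROM pg_catalog.pg_class c\n"
                      "WHERE c.oid = '%s';",
                      oid);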
Also remove a useless EXISTS sub-select that someone had stuck into
describeOneTableDetails' queries for child tables. We just got the OID
out of pg_class, so I hardly see how checking that it exists in pg_class
was doing anything helpful.
In passing, try to improve the emitted formatting of a couple of
these queries, though I didn't work really hard on that. And merge
unnecessarily duplicative coding in some other places.
Much of this was new in HEAD, but some was quite old; back-patch
as appropriate.
If several sessions are concurrently locking a tuple update chain with
nonconflicting lock modes using an old snapshot, and they all succeed,
some of them may nonetheless fail later: restarting the loop (due to a
concurrent Xmax change) can produce an error in the subsequent pass
while trying to obtain a tuple lock that they already hold on some
tuple version.
This can only happen with very high concurrency (where a row is being
both updated and FK-checked by multiple transactions concurrently), but
it's been observed in the field and can have unpleasant consequences
such as an FK check failing to see a tuple that definitely exists:
ERROR: insert or update on table "child_table" violates foreign key constraint "fk_constraint_name"
DETAIL: Key (keyid)=(123456) is not present in table "parent_table".
(where the key is observably present in the table).
Discussion: https://postgr.es/m/20170714210011.r25mrff4nxjhmf3g@alvherre.pgsql
Trading a little too heavily on letting the code path be the same whether
we were creating shared data structures or only attaching to them,
InitPredicateLocks() inserted the "scratch" PredicateLockTargetHash entry
unconditionally. This is just wrong if we're in a postmaster child,
which would only reach this code in EXEC_BACKEND builds. Most of the
time, the hash_search(HASH_ENTER) call would simply report that the
entry already existed, causing no visible effect since the code did not
bother to check for that possibility. However, if this happened while
some other backend had transiently removed the "scratch" entry, then
that other backend's eventual RestoreScratchTarget would suffer an
assert failure; this appears to be the explanation for a recent failure
on buildfarm member culicidae. In non-assert builds, there would be
no visible consequences there either. But nonetheless this is a pretty
bad bug for EXEC_BACKEND builds, for two reasons:
1. Each new backend would perform the hash_search(HASH_ENTER) call
without holding any lock that would prevent concurrent access to the
PredicateLockTargetHash hash table. This creates a low but certainly
nonzero risk of corruption of that hash table.
2. In the event that the race condition occurred, by reinserting the
scratch entry too soon, we were defeating the entire purpose of the
scratch entry, namely to guarantee that transaction commit could move
hash table entries around with no risk of out-of-memory failure.
The odds of an actual OOM failure are quite low, but not zero, and if
it did happen it would again result in corruption of the hash table.
The user-visible symptoms of such corruption are a little hard to predict,
but would presumably amount to misbehavior of SERIALIZABLE transactions
that'd require a crash or postmaster restart to fix.
To fix, just skip the hash insertion if IsUnderPostmaster. I also
inserted a bunch of assertions that the expected things happen
depending on whether IsUnderPostmaster is true. That might be overkill,
since most comparable code in other functions isn't quite that paranoid,
but once burnt, twice shy.
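The fix, roughly (a sketch of the relevant hunk in InitPredicateLocks();
details elided):

    /* Only the process that creates the shared hash table (that is, not
     * a postmaster child) may insert the scratch entry. */
    if (!IsUnderPostmaster)
    {
        (void) hash_search_with_hash_value(PredicateLockTargetHash,
                                           &ScratchTargetTag,
                                           ScratchTargetTagHash,
                                           HASH_ENTER, &found);
        Assert(!found);
    }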
In passing, also move a couple of lines to places where they seemed
to make more sense.
Diagnosis of problem by Thomas Munro, patch by me. Back-patch to
all supported branches.
Discussion: https://postgr.es/m/10593.1500670709@sss.pgh.pa.us
Various cases involving renaming of view columns are handled by having
make_viewdef pass down the view's current relation tupledesc to
get_query_def, which then takes care to use the column names from the
tupledesc for the output column names of the SELECT. For some reason
though, we'd missed teaching make_ruledef to do similarly when it is
printing an ON SELECT rule, even though this is exactly the same case.
The results from pg_get_ruledef would then be different and arguably wrong.
In particular, this breaks pre-v10 versions of pg_dump, which in some
situations would define views by means of emitting a CREATE RULE ... ON
SELECT command. Third-party tools might not be happy either.
In passing, clean up some crufty code in make_viewdef; we'd apparently
modernized the equivalent code in make_ruledef somewhere along the way,
and missed this copy.
Per report from Gilles Darold. Back-patch to all supported versions.
Discussion: https://postgr.es/m/ec05659a-40ff-4510-fc45-ca9d965d0838@dalibo.com
pg_dump with the --clean option failed to emit DROP EVENT TRIGGER
commands for event triggers. In a closely related oversight,
it also did not emit ALTER OWNER commands for event triggers.
Since only superusers can create event triggers, the latter oversight
is of little practical consequence ... but if we're going to record
an owner for event triggers, then surely pg_dump should preserve it.
Per complaint from Greg Atkins. Back-patch to 9.3 where event triggers
were introduced.
Discussion: https://postgr.es/m/20170722191142.yi4e7tzcg3iacclg@gmail.com
The new test cases added in commit 8bf58c0d9 turn out to have output
that can vary depending on the lc_messages setting prevailing on the
test server. Hide the remote end's error messages to ensure stable
output. This isn't a terribly desirable solution; we'd rather know
that the connection failed for the expected reason and not some other
one. But there seems little choice for the moment.
Per buildfarm.
Discussion: https://postgr.es/m/18419.1500658570@sss.pgh.pa.us
Previously, postgres_fdw would keep on using an existing connection even
if the user did ALTER SERVER or ALTER USER MAPPING commands that should
affect connection parameters. Teach it to watch for catcache invals
on these catalogs and re-establish connections when the relevant catalog
entries change. Per bug #14738 from Michal Lis.
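The mechanism is the usual syscache-callback one; schematically (the
callback name here is illustrative):

    /* Ask to be notified whenever pg_foreign_server or pg_user_mapping
     * entries change, so stale connections can be marked for rebuild. */
    CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
                                  pgfdw_inval_callback, (Datum) 0);
    CacheRegisterSyscacheCallback(USERMAPPINGOID,
                                  pgfdw_inval_callback, (Datum) 0);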
In passing, clean up some rather crufty decisions in commit ae9bfc5d6
about where fields of ConnCacheEntry should be reset. We now reset
all the fields whenever we open a new connection.
Kyotaro Horiguchi, reviewed by Ashutosh Bapat and myself.
Back-patch to 9.3 where postgres_fdw appeared.
Discussion: https://postgr.es/m/20170710113917.7727.10247@wrigleys.postgresql.org
Claiming that NATURAL JOIN is equivalent to CROSS JOIN when there are
no common column names is only strictly correct if it's an inner join;
you can't say e.g. CROSS LEFT JOIN. Better to explain it as meaning
JOIN ON TRUE instead. Per a suggestion from David Johnston.
Discussion: https://postgr.es/m/CAKFQuwb+mYszQhDS9f_dqRrk1=Pe-S6D=XMkAXcDf4ykKPmgKQ@mail.gmail.com
Normally, a JoinExpr would have empty "quals" only if it came from CROSS
JOIN syntax. However, it's possible to get to this state by specifying
NATURAL JOIN between two tables with no common column names, and there
might be other ways too. The code previously printed no ON clause if
"quals" was empty; that's right for CROSS JOIN but syntactically invalid
if it's some type of outer join. Fix by printing ON TRUE in that case.
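Schematically, the deparsing logic becomes (a sketch of
get_from_clause_item() in ruleutils.c, not the verbatim hunk):

    if (j->quals)
    {
        appendStringInfoString(buf, " ON ");
        get_rule_expr(j->quals, context, false);
    }
    else if (j->jointype != JOIN_INNER)
    {
        /* if we print nothing, an outer join becomes invalid syntax */
        appendStringInfoString(buf, " ON TRUE");
    }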
This got broken by commit 2ffa740be, which stopped using NATURAL JOIN
syntax in ruleutils output due to its brittleness in the face of
column renamings. Back-patch to 9.3 where that commit appeared.
Per report from Tushar Ahuja.
Discussion: https://postgr.es/m/98b283cd-6dda-5d3f-f8ac-87db8c76a3da@enterprisedb.com
It seems pretty confusing to have tests named both largeobject and
large_object. The latter is of very recent vintage (commit ff992c074),
so get rid of it in favor of merging into the former.
Also, enable the LO comment test that was added by commit 70ad7ed4e,
since the later commit added the then-missing pg_upgrade functionality.
The large_object.sql test case is almost completely redundant with that,
but not quite: it seems like creating a user-defined LO with an OID in
the system range might be an interesting case for pg_upgrade, so let's
keep it.
Like the earlier patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/18665.1500306372@sss.pgh.pa.us
When writing a backup to stdout with pg_basebackup on Windows, switch
stdout to binary mode. Otherwise any LF bytes in the output are
converted to CR+LF, corrupting the backup.
In passing, standardize on using "_setmode" instead of "setmode", for
the sake of consistency. They both do the same thing, but according to
MSDN documentation, setmode is deprecated.
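The fix is the usual Windows idiom (sketch):

    #ifdef WIN32
        /* prevent LF -> CR+LF translation of the binary tar stream */
        _setmode(fileno(stdout), _O_BINARY);
    #endif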
Fixes bug #14634, reported by Henry Boehlert. Patch by Haribabu Kommi.
Backpatch to all supported versions.
Discussion: https://www.postgresql.org/message-id/20170428082818.24366.13134@wrigleys.postgresql.org
The grammar will only accept something syntactically similar to a function
call in a function-in-FROM expression. However, there are various ways
to input something that ruleutils.c won't deparse that way, potentially
leading to a view or rule that fails dump/reload. Fix by inserting a
dummy CAST around anything that isn't going to deparse as a function
(which is one of the ways to get something like that in there in the
first place).
In HEAD, also make use of the infrastructure added by this to avoid
emitting unnecessary parentheses in CREATE INDEX deparsing. I did
not change that in back branches, thinking that people might find it
to be an unexpected/unnecessary behavioral change.
In HEAD, also fix incorrect logic for when to add extra parens to
partition key expressions. Somebody apparently thought they could
get away with simpler logic than pg_get_indexdef_worker has, but
they were wrong --- a counterexample is PARTITION BY LIST ((a[1])).
Ignoring the prettyprint flag for partition expressions isn't exactly
a nice solution anyway.
This has been broken all along, so back-patch to all supported branches.
Discussion: https://postgr.es/m/10477.1499970459@sss.pgh.pa.us
The race condition goes like this:
1. GetNewTransactionId advances nextXid e.g. from 100 to 101
2. GetOldestActiveTransactionId reads the new nextXid, 101
3. GetOldestActiveTransactionId loops through the proc array. There are no
active XIDs there, so it returns 101 as the oldest active XID.
4. GetNewTransactionId stores XID 100 to MyPgXact->xid
So, GetOldestActiveTransactionId returned XID 101, even though 100 only
just started and is surely still running.
This would be hard to hit in practice, and even harder to spot any ill
effect if it happens. GetOldestActiveTransactionId is only used when
creating a checkpoint in a master server, and the race condition can only
happen on an online checkpoint, as there are no backends running during a
shutdown checkpoint. The oldestActiveXid value of an online checkpoint is
only used when starting up a hot standby server, to determine the point
from which pg_subtrans is initialized. For the race condition to
happen, there must be no other XIDs in the proc array that would hold back
the oldest-active XID value, which means that the missed XID must be a top
transaction's XID. However, pg_subtrans is not used for top XIDs, so I
believe an off-by-one error is in fact inconsequential. Nevertheless, let's
fix it, as it's clearly wrong and the fix is simple.
This has been wrong ever since hot standby was introduced, so backport to
all supported versions.
Discussion: https://www.postgresql.org/message-id/e7258662-82b6-7a45-56d4-99b337a32bf7@iki.fi
Further investigation shows that ruleutils isn't quite up to speed either
for cases where we have a domain-over-array: it needs to be prepared to
look past a CoerceToDomain at the top level of field and element
assignments, else it decompiles them incorrectly. Potentially this would
result in failure to dump/reload a rule, if it looked like the one in the
new test case. (I also added a test for EXPLAIN; that output isn't broken,
but clearly we need more test coverage here.)
Like commit b1cb32fb6, this bug is reachable in cases we already support,
so back-patch all the way.
compute_tsvector_stats() detoasted and kept in memory every tsvector value
in the sample, but that can be a lot of memory. The original bug report
described a case using over 10 gigabytes, with statistics target of 10000
(the maximum).
To fix, allocate a separate copy of just the lexemes that we keep around,
and free the detoasted tsvector values as we go. This adds some palloc/pfree
overhead, when you have a lot of distinct lexemes in the sample, but it's
better than running out of memory.
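Schematically (a hedged sketch of the new allocation pattern in
compute_tsvector_stats(); value, lex_ptr, and lex_len are illustrative
names):

    TSVector    vector = DatumGetTSVector(value);   /* detoasts if needed */
    char       *lexeme = palloc(lex_len);

    memcpy(lexeme, lex_ptr, lex_len);   /* keep only the lexeme text */
    if ((Datum) vector != value)
        pfree(vector);                  /* release the detoasted copy now */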
Fixes bug #14654 reported by James C. Reviewed by Tom Lane. Backport to
all supported versions.
Discussion: https://www.postgresql.org/message-id/20170514200602.1451.46797@wrigleys.postgresql.org
We allow INSERT and UPDATE commands to assign to the same column more than
once, as long as the assignments are to subfields or elements rather than
the whole column. However, this failed when the target column was a domain
over array rather than plain array. Fix by teaching process_matched_tle()
to look through CoerceToDomain nodes, and add relevant test cases.
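The look-through is a small loop (sketch; the helper name is
illustrative, and the real fix may inline the equivalent logic):

    /* Strip CoerceToDomain nodes so that the underlying FieldStore or
     * ArrayRef assignment structure becomes visible. */
    static Node *
    strip_domain_coercions(Node *node)
    {
        while (node && IsA(node, CoerceToDomain))
            node = (Node *) ((CoerceToDomain *) node)->arg;
        return node;
    }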
Also add a group of test cases exercising domains over array of composite.
It's doubtless accidental that CREATE DOMAIN allows this case while not
allowing straight domain over composite; but it does, so we'd better make
sure we don't break it. (I could not find any documentation mentioning
either side of that, so no doc changes.)
It's been like this for a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
We've heard occasional reports of backend launch failing because
pgwin32_ReserveSharedMemoryRegion() fails, indicating that something
has already used that address space in the child process. It's not
very clear what, given that we disable ASLR in Windows builds, but
suspicion falls on antivirus products. It'd be better if we didn't
have to disable ASLR, anyway. So let's try to ameliorate the problem
by retrying the process launch after such a failure, up to 100 times.
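In outline, the retry logic looks about like this (a heavily elided
sketch of internal_forkexec() on Windows; not the exact committed code):

    for (i = 0; i < 100; i++)
    {
        if (!CreateProcess(NULL, cmdLine, NULL, NULL, TRUE,
                           CREATE_SUSPENDED, NULL, NULL, &si, &pi))
            return -1;
        if (pgwin32_ReserveSharedMemoryRegion(pi.hProcess))
            break;              /* success */
        /* reservation failed: kill the child and start over */
        TerminateProcess(pi.hProcess, 255);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }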
Patch by me, based on previous work by Amit Kapila and others.
This is a longstanding issue, so back-patch to all supported branches.
Discussion: https://postgr.es/m/CAA4eK1+R6hSx6t_yvwtx+NRzneVp+MRqXAdGJZChcau8Uij-8g@mail.gmail.com
Original wording had confusingly vague antecedent for "they", so replace
that with a more repetitive but clearer formulation. In passing, make the
link to the installation requirements section more specific. Per gripe
from Martin Mai, though this is not the fix he initially proposed.
Discussion: https://postgr.es/m/CAN_NWRu-cWuNaiXUjV3m4H-riWURuPW=j21bSaLADs6rjjzXgQ@mail.gmail.com
Buildfarm evidence shows that TCP_KEEPALIVE_THRESHOLD doesn't exist
after all on Solaris < 11. This means we need to take positive action to
prevent the TCP_KEEPALIVE code path from being taken on that platform.
I've chosen to limit it with "&& defined(__darwin__)", since it's unclear
that anyone else would follow Apple's precedent of spelling the symbol
that way.
Also, follow a suggestion from Michael Paquier of eliminating code
duplication by defining a couple of intermediate symbols for the
socket option.
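Concretely, the intermediate symbols look about like this (sketch; the
committed names may differ slightly):

    #if defined(TCP_KEEPIDLE)
    /* TCP_KEEPIDLE is the name of this option on Linux and *BSD */
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPIDLE
    #define PG_TCP_KEEPALIVE_IDLE_STR "TCP_KEEPIDLE"
    #elif defined(TCP_KEEPALIVE_THRESHOLD)
    /* TCP_KEEPALIVE_THRESHOLD is the name of this option on Solaris >= 11 */
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPALIVE_THRESHOLD
    #define PG_TCP_KEEPALIVE_IDLE_STR "TCP_KEEPALIVE_THRESHOLD"
    #elif defined(TCP_KEEPALIVE) && defined(__darwin__)
    /* TCP_KEEPALIVE is the name of this option on macOS */
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPALIVE
    #define PG_TCP_KEEPALIVE_IDLE_STR "TCP_KEEPALIVE"
    #endif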
In passing, make some effort to reduce the number of translatable messages
by replacing "setsockopt(foo) failed" with "setsockopt(%s) failed", etc,
throughout the affected files. And update relevant documentation so
that it doesn't claim to provide an exhaustive list of the possible
socket option names.
Like the previous commit (f0256c774), back-patch to all supported branches.
Discussion: https://postgr.es/m/20170627163757.25161.528@wrigleys.postgresql.org
Turns out that the socket option for this is named TCP_KEEPALIVE_THRESHOLD,
at least according to the tcp(7P) man page for Solaris 11. (But since that
text refers to "SunOS", it's likely pretty ancient.) It appears that the
symbol TCP_KEEPALIVE does get defined on that platform, but it doesn't
seem to represent a valid protocol-level socket option. This leads to
bleats in the postmaster log, and no tcp_keepalives_idle functionality.
Per bug #14720 from Andrey Lizenko, as well as an earlier report from
Dhiraj Chawla that nobody had followed up on. The issue's been there
since we added the TCP_KEEPALIVE code path in commit 5acd417c8, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/20170627163757.25161.528@wrigleys.postgresql.org
check_agg_arguments_walker threw an error upon seeing a SRF or window
function, but that is too aggressive: if the function is within a
sub-select then it's perfectly fine. I broke the SRF case in commit
0436f6bde by copying the logic for window functions ... but that was
broken too, and had been since commit eaccfded9.
Repair both cases in HEAD, and the window function case back to 9.3.
9.2 gets this right.
When a walreceiver dies, the startup process will notice that and send
a PMSIGNAL_START_WALRECEIVER signal to the postmaster, asking for a new
walreceiver to be launched. There's a race condition, which at least
in HEAD is very easy to hit, whereby the postmaster might see that
signal before it processes the SIGCHLD from the walreceiver process.
In that situation, sigusr1_handler() just dropped the start request
on the floor, reasoning that it must be redundant. Eventually, after
10 seconds (WALRCV_STARTUP_TIMEOUT), the startup process would make a
fresh request --- but that's a long time if the connection could have
been re-established almost immediately.
Fix it by setting a state flag inside the postmaster that we won't
clear until we do launch a walreceiver. In cases where that results
in an extra walreceiver launch, it's up to the walreceiver to realize
it's unwanted and go away --- but we have, and need, that logic anyway
for the opposite race case.
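In outline (names as best I recall from the committed fix; treat as a
sketch):

    /* postmaster-local state: a start request we have not yet satisfied */
    static bool WalReceiverRequested = false;

    /* in sigusr1_handler(): remember the request even if we cannot act */
    if (CheckPostmasterSignal(PMSIGNAL_START_WALRECEIVER))
    {
        WalReceiverRequested = true;
        MaybeStartWalReceiver();    /* launches one only if state permits */
    }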
I came across this through investigating unexpected delays in the
src/test/recovery TAP tests: it manifests there in test cases where
a master server is stopped and restarted while leaving streaming
slaves active.
This logic has been broken all along, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/21344.1498494720@sss.pgh.pa.us
The stats collector disregards inquiry messages that bear a cutoff_time
earlier than when it last wrote the relevant stats file. That's fine, but at
startup when it reads the "permanent" stats files, it absorbed their
timestamps as if they were the times at which the corresponding temporary
stats files had been written. In reality, of course, there's no data
out there at all. This led to disregarding inquiry messages soon after
startup if the postmaster had been shut down and restarted within less
than PGSTAT_STAT_INTERVAL, which is a pretty common scenario, both for
testing and in the field. Requesting backends would hang for 10 seconds
and then report failure to read statistics, unless they got bailed out
by some other backend coming along and making a newer request within
that interval.
I came across this through investigating unexpected delays in the
src/test/recovery TAP tests: it manifests there because the autovacuum
launcher hangs for 10 seconds when it can't get statistics at startup,
thus preventing a second shutdown from occurring promptly. We might
want to do some things in the autovac code to make it less prone to
getting stuck that way, but this change is a good bug fix regardless.
In passing, also fix pgstat_read_statsfiles() to ensure that it
re-zeroes its global stats variables if they are corrupted by a
short read from the stats file. (Other reads in that function
go into temp variables, so that the issue doesn't arise.)
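The re-zeroing is simple defense on the short-read path (a sketch of the
hunk in pgstat_read_statsfiles(); details such as the label may differ):

    if (fread(&globalStats, 1, sizeof(globalStats), fpin) !=
        sizeof(globalStats))
    {
        ereport(pgStatRunningInCollector ? LOG : WARNING,
                (errmsg("corrupted statistics file \"%s\"", statfile)));
        memset(&globalStats, 0, sizeof(globalStats));   /* don't keep junk */
        goto done;
    }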
This has been broken since we created the separation between permanent
and temporary stats files in 8.4, so back-patch to all supported branches.
Discussion: https://postgr.es/m/16860.1498442626@sss.pgh.pa.us
When promoting a standby just after an XLOG_SWITCH record was replayed,
and the next segment(s) were already locally available (via walsender,
restore_command + trigger/recovery target), such a segment could
accidentally be recycled onto the past of the new timeline. Later the
checkpointer would create a .ready file for it, assuming there was an
error during creation, and it would get archived. That causes trouble
if another standby is later brought up from a basebackup taken before
the timeline creation: since XLogFileReadAnyTLI just tries all possible
timelines, that standby would try to read the segment, which doesn't
have valid contents, so replay would fail.
If the problem has already occurred, it can be fixed by removing the
segment and/or having restore_command filter it out.
The reason for the creation of such "phantom" segments was that after
an XLOG_SWITCH record the EndOfLog variable points to the beginning of
the next segment, and RemoveXlogFile() used XLByteToPrevSeg().
Normally RemoveXlogFile() doing so is harmless, because the last
segment will still exist preventing InstallXLogFileSegment() from
causing harm, but just after promotion there's no previous segment on
the new timeline.
Fix that by using XLByteToSeg() instead of XLByteToPrevSeg().
Author: Andres Freund
Reported-By: Greg Burek
Discussion: https://postgr.es/m/20170619073026.zcwpe6mydsaz5ygd@alap3.anarazel.de
Backpatch: 9.2-, bug older than all supported versions
When commit 0f33a719fdbb5d8c43839ea0d2c90cd03e2af2d2 removed the
instructions to start/stop the new cluster before running rsync, it
became possible for pg_resetwal/pg_resetxlog to leave the final WAL
record at wal_level=minimal, preventing upgraded standby servers from
reconnecting.
This patch fixes that by having pg_upgrade unconditionally start/stop
the new cluster after pg_resetwal/pg_resetxlog has run.
Backpatch through 9.2 since, though the instructions were added in PG
9.5, they worked all the way back to 9.2.
Discussion: https://postgr.es/m/20170620171844.GC24975@momjian.us
Backpatch-through: 9.2
When materialized views were added, psql's \d commands were made to
treat them as a separate object category ... but not everyplace in the
documentation or comments got the memo.
Noted by David Johnston. Back-patch to 9.3 where matviews came in.
Discussion: https://postgr.es/m/CAKFQuwb27M3VXRhHErjCpkWwN9eKThbqWb1=trtoXi9_ejqPXQ@mail.gmail.com
The combination of -Z -Fp and output to stdout resulted in corrupted
output data, because we left stdout in text mode, resulting in newline
conversion being done on the compressed stream. Switch stdout to binary
mode for this case, at the same place where we do it for non-text output
formats.
Report and patch by Kuntal Ghosh, tested by Ashutosh Sharma and Neha
Sharma. Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAGz5QCJPvbBjXAmJuGx1B_41yVCetAJhp7rtaDf7XQGWuB1GSw@mail.gmail.com
When a new base type is created using the old-style procedure of first
creating the input/output functions with "opaque" in place of the base
type, the "opaque" argument/return type is changed to the final base type
at CREATE TYPE time. However, we did not create a pg_depend record when
doing that, so the functions were left not depending on the type.
Fixes bug #14706, reported by Karen Huddleston.
Discussion: https://www.postgresql.org/message-id/20170614232259.1424.82774@wrigleys.postgresql.org
We had three occurrences of essentially the same coding pattern
wherein we tried to retrieve a query result from a libpq connection
without blocking. In the case where PQconsumeInput failed (typically
indicating a lost connection), all three loops simply gave up and
returned, forgetting to clear any previously-collected PGresult
object. Since those are malloc'd not palloc'd, the oversight results
in a process-lifespan memory leak.
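The corrected shape of the loop, as a self-contained sketch of the
pattern (the real functions add latch waits and error reporting):

    static PGresult *
    get_result_noleak(PGconn *conn)
    {
        PGresult   *last_res = NULL;

        for (;;)
        {
            PGresult   *res;

            while (PQisBusy(conn))
            {
                /* ... wait for the socket to become readable ... */
                if (!PQconsumeInput(conn))
                {
                    PQclear(last_res);  /* the fix: don't leak this */
                    return NULL;
                }
            }
            res = PQgetResult(conn);
            if (res == NULL)
                break;          /* query complete */
            PQclear(last_res);
            last_res = res;
        }
        return last_res;
    }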
One instance, in libpqwalreceiver, is of little significance because
the walreceiver process would just quit anyway if its connection failed.
But we might as well fix it.
The other two instances, in postgres_fdw, are somewhat more worrisome
because at least in principle the scenario could be repeated, allowing
the amount of memory leaked to build up to something worth worrying
about. Moreover, in these cases the loops contain CHECK_FOR_INTERRUPTS
calls, as well as other calls that could potentially elog(ERROR),
providing another way to exit without having cleared the PGresult.
Here we need to add PG_TRY logic similar to what exists in quite a
few other places in postgres_fdw.
Coverity noted the libpqwalreceiver bug; I found the other two cases
by checking all calls of PQconsumeInput.
Back-patch to all supported versions as appropriate (9.2 lacks
postgres_fdw, so this is really quite unexciting for that branch).
Discussion: https://postgr.es/m/22620.1497486981@sss.pgh.pa.us
Commit f039eaac7131ef2a4cf63a10cf98486f8bcd09d2, later back-patched
with commit 1b812afb0eafe125b820cc3b95e7ca03821aa675, allowed many of
the queries issued by postgres_fdw to fetch remote data to respond to
cancel interrupts in a timely fashion. However, it didn't do anything
about the transaction control commands, which remained
noninterruptible.
Improve the situation by changing do_sql_command() to retrieve query
results using pgfdw_get_result(), which uses the asynchronous
interface to libpq so that it can check for interrupts every time
libpq returns control. Since this might result in a situation
where we can no longer be sure that the remote transaction state
matches the local transaction state, add a facility to force all
levels of the local transaction to abort if we've lost track of
the remote state; without this, an apparently-successful commit of
the local transaction might fail to commit changes made on the
remote side. Also, add a 60-second timeout for queries issued during
transaction abort; if that expires, give up and mark the state of
the connection as unknown. Drop all such connections when we exit
the local transaction. Together, these changes mean that if we're
aborting the local toplevel transaction anyway, we can just drop the
remote connection in lieu of waiting (possibly for a very long time)
for it to complete an abort.
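After the change, do_sql_command() is approximately (a sketch that
matches the shape, if not every detail, of the committed code):

    static void
    do_sql_command(PGconn *conn, const char *sql)
    {
        PGresult   *res;

        if (!PQsendQuery(conn, sql))
            pgfdw_report_error(ERROR, NULL, conn, false, sql);
        res = pgfdw_get_result(conn, sql);  /* interruptible wait */
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            pgfdw_report_error(ERROR, res, conn, true, sql);
        PQclear(res);
    }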
This still leaves quite a bit of room for improvement. PQcancel()
has no asynchronous interface, so if we get stuck sending the cancel
request we'll still hang. Also, PQsetnonblocking() is not used, which
means we could block uninterruptibly when sending a query. There
might be some other optimizations possible as well. Nonetheless,
this allows us to escape a wait for an unresponsive remote server
quickly in many more cases than previously.
Report by Suraj Kharage. Patch by me and Rafia Sabih. Review
and testing by Amit Kapila and Tushar Ahuja.
Discussion: http://postgr.es/m/CAF1DzPU8Kx+fMXEbFoP289xtm3bz3t+ZfxhmKavr98Bh-C0TqQ@mail.gmail.com
If authentication over an SSL connection fails, with sslmode=prefer,
libpq will reconnect without SSL and retry. However, we did not clear
the variables related to GSS, SSPI, and SASL authentication state, when
reconnecting. Because of that, the second authentication attempt would
always fail with a "duplicate GSS/SASL authentication request" error.
pg_SSPI_startup did not check for duplicate authentication requests like
the corresponding GSS and SASL functions, so with SSPI, you would leak
some memory instead.
Another way this could manifest itself, on version 10, is if you list
multiple hostnames in the "host" parameter. If the first server requests
Kerberos or SCRAM authentication, but it fails, the attempts to connect to
the other servers will also fail with "duplicate authentication request"
errors.
To fix, move the clearing of authentication state from closePGconn to
pgDropConnection, so that it is cleared also when re-connecting.
Patch by Michael Paquier, with some kibitzing by me.
Backpatch down to 9.3. 9.2 has the same bug, but the code around closing
the connection is somewhat different, so this patch doesn't apply.
To fix this in 9.2, I think we would need to back-port commit 210eb9b743
first, and then apply this patch. However, given that we only bumped into
this in our own testing and haven't heard any reports from users, and that
9.2 will be end-of-lifed in a couple of months anyway, it doesn't seem
worth the risk and trouble.
Discussion: https://www.postgresql.org/message-id/CAB7nPqRuOUm0MyJaUy9L3eXYJU3AKCZ-0-03=-aDTZJGV4GyWw@mail.gmail.com