Commit 92785dac2 copied some logic related to advancement of inStart
from pqParseInput3 into getRowDescriptions and getAnotherTuple,
because it wanted to allow user-defined row processor callbacks to
potentially longjmp out of the library, and inStart would have to be
updated before that happened to avoid an infinite loop. We later
decided that that API was impossibly fragile and reverted it, but
we didn't undo all of the related code changes, and this bit of
messiness survived. Undo it now so that there's just one place in
pqParseInput3's processing where inStart is advanced; this will
simplify addition of better tracing support.
getParamDescriptions had grown similar processing somewhere along
the way (not in 92785dac2; I didn't track down just when), but it's
actually buggy because its handling of corrupt-message cases seems to
have been copied from the v2 logic where we lacked a known message
length. The cases where we "goto not_enough_data" should not simply
return EOF, because then we won't consume the message, potentially
creating an infinite loop. That situation now represents a
definitively corrupt message, and we should report it as such.
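For illustration, the intended control flow looks roughly like this standalone toy (names invented; not the actual libpq code):

    /* Toy illustrating the v3-protocol rule: the message length is known
     * up front, so on a parse failure the parser should consume (skip)
     * the whole message and report corruption, not return "need more
     * data" and retry the same bytes forever. */
    #include <stdio.h>

    static int
    parse_message(int msglen, int *inCursor)
    {
        int         parse_ok = 0;   /* pretend the payload is corrupt */

        if (!parse_ok)
        {
            *inCursor += msglen;    /* discard the whole known-length message */
            return -1;              /* report a definitively corrupt message */
        }
        *inCursor += msglen;
        return 0;
    }

    int
    main(void)
    {
        int         cursor = 0;

        if (parse_message(42, &cursor) < 0)
            printf("corrupt message; input cursor advanced to %d\n", cursor);
        return 0;
    }
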
Although no field reports of getParamDescriptions getting stuck in
a loop have been seen, it seems appropriate to back-patch that fix.
I chose to back-patch all of this to keep the logic looking more alike
in supported branches.
Discussion: https://postgr.es/m/2217283.1615411989@sss.pgh.pa.us
We had "short *mdy" in the extern declarations, but "short mdy[3]"
in the actual function definitions. Per C99 these are equivalent,
but recent versions of gcc have started to issue warnings about
the inconsistency. Clean it up before the warnings get any more
widespread.
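For illustration, the mismatch looks like this self-contained example (the function name is invented; the real ones are ecpg's date-handling functions, and newer gcc flags this via -Warray-parameter, for example):

    #include <stdio.h>

    extern void set_mdy(short *mdy);    /* old-style extern declaration */

    void
    set_mdy(short mdy[3])               /* definition uses an array type */
    {
        mdy[0] = 12;
        mdy[1] = 25;
        mdy[2] = 2020;
    }

    int
    main(void)
    {
        short       mdy[3];

        set_mdy(mdy);
        printf("%d/%d/%d\n", mdy[0], mdy[1], mdy[2]);
        return 0;
    }

Per C99 the two parameter forms are equivalent, but the inconsistent declarations draw a warning from recent compilers.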
Back-patch, in case anyone wants to build older PG versions with
bleeding-edge compilers.
Discussion: https://postgr.es/m/2401575.1611764534@sss.pgh.pa.us
We must not set the "done" flag until after we've executed the
initialization function. Otherwise, other threads can fall through
the initial unlocked test before initialization is really complete.
This has been seen to cause rare failures of ecpg's thread/descriptor
test, and it could presumably cause other sorts of misbehavior in
threaded ECPG-using applications, since ecpglib relies on
pthread_once() in several places.
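A self-contained sketch of the corrected pattern (compile with -pthread; names invented, not ecpglib's actual code):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void
    run_once(volatile bool *done, void (*initfn) (void))
    {
        if (!*done)
        {
            pthread_mutex_lock(&init_mutex);
            if (!*done)
            {
                initfn();           /* run the initializer first ... */
                *done = true;       /* ... and only then publish "done" */
            }
            pthread_mutex_unlock(&init_mutex);
        }
    }

    static volatile bool did_init = false;

    static void
    do_init(void)
    {
        printf("initializing\n");
    }

    int
    main(void)
    {
        run_once(&did_init, do_init);
        run_once(&did_init, do_init);   /* second call is a no-op */
        return 0;
    }

Setting the flag before running the initializer lets another thread pass the unlocked fast-path test and proceed with uninitialized state; doing it afterwards closes that window.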
Diagnosis and patch by me, based on investigation by Alexander Lakhin.
Back-patch to all supported branches (the bug dates to 2007).
Discussion: https://postgr.es/m/16685-d6cd241872c101d3@postgresql.org
The Windows documentation insists that every WSAStartup call should
have a matching WSACleanup call. However, if that ever had actual
relevance, it wasn't in this century. Every remotely-modern Windows
kernel is capable of cleaning up when a process exits without doing
that, and must be so to avoid resource leaks in case of a process
crash. Moreover, Postgres backends have done WSAStartup without
WSACleanup since commit 4cdf51e64 in 2004, and we've never seen any
indication of a problem with that.
libpq's habit of doing WSAStartup during connection start and
WSACleanup during shutdown is also rather inefficient, since a
series of non-overlapping connection requests leads to repeated,
quite expensive DLL unload/reload cycles. We document a workaround
for that (having the application call WSAStartup for itself), but
that's just a kluge. It's also worth noting that it's far from
uncommon for applications to exit without doing PQfinish, and
we've not heard reports of trouble from that either.
However, the real reason for acting on this is that recent
experiments by Alexander Lakhin show that calling WSACleanup
during PQfinish is triggering the symptom we occasionally see
that a process using libpq fails to emit expected stdio output.
Therefore, let's change libpq so that it calls WSAStartup only
once per process, during the first connection attempt, and never
calls WSACleanup at all.
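A Windows-only sketch of the once-per-process approach (names invented; thread-safety of the flag glossed over, and not libpq's actual code):

    /* Windows-only; link with ws2_32. */
    #include <winsock2.h>
    #include <stdbool.h>

    static bool wsastartup_done = false;

    int
    ensure_wsastartup(void)
    {
        WSADATA     wsadata;

        if (wsastartup_done)
            return 0;               /* already done for this process */
        if (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0)
            return -1;              /* report failure to the caller */
        wsastartup_done = true;     /* intentionally never WSACleanup'd */
        return 0;
    }
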
While at it, get rid of the only other WSACleanup call in our code
tree, in pg_dump/parallel.c; that presumably is equally useless.
Back-patch of HEAD commit 7d00a6b2d.
Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru
Historically there's been a hard-wired assumption here that no line of
a .pgpass file could be as long as NAMEDATALEN*5 bytes. That's a bit
shaky to start off with, because (a) there's no reason to suppose that
host names fit in NAMEDATALEN, and (b) this figure fails to allow for
backslash escape characters. However, it fails completely if someone
wants to use a very long password, and we're now hearing reports of
people wanting to use "security tokens" that can run up to several
hundred bytes. Another angle is that the file is specified to allow
comment lines, but there's no reason to assume that long comment lines
aren't possible.
Rather than guessing at what might be a more suitable limit, let's
replace the fixed-size buffer with an expansible PQExpBuffer. That
adds one malloc/free cycle to the typical use-case, but that's surely
pretty cheap relative to the I/O this code has to do.
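A sketch of PQExpBuffer-based line reading (simplified relative to the real .pgpass code; compile against libpq's internal headers, whose install location varies):

    #include "postgres_fe.h"
    #include "pqexpbuffer.h"

    #include <stdio.h>

    static int
    read_line(FILE *fp, PQExpBuffer buf)
    {
        int         c;

        resetPQExpBuffer(buf);
        while ((c = fgetc(fp)) != EOF && c != '\n')
            appendPQExpBufferChar(buf, (char) c);
        return (c != EOF || buf->len > 0);
    }

    int
    main(void)
    {
        PQExpBufferData linebuf;
        FILE       *fp = fopen(".pgpass", "r");

        if (fp == NULL)
            return 1;
        initPQExpBuffer(&linebuf);
        while (read_line(fp, &linebuf))
            printf("line: %s\n", linebuf.data);
        termPQExpBuffer(&linebuf);  /* one malloc/free cycle for the whole file */
        fclose(fp);
        return 0;
    }
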
Also, add TAP test cases to exercise this code, because there was no
test coverage before.
This reverts most of commit 2eb3bc588, as there's no longer a need for
a warning message about overlength .pgpass lines. (I kept the explicit
check for comment lines, though.)
In HEAD and v13, this also fixes an oversight in 74a308cf5: there's not
much point in explicit_bzero'ing the line buffer if we only do so in two
of the three exit paths.
Back-patch to all supported branches, except that the test case only
goes back to v10 where src/test/authentication/ was added.
Discussion: https://postgr.es/m/4187382.1598909041@sss.pgh.pa.us
A couple of test cases had connect_timeout=14, a value that seems
to have been plucked from a hat. While it's more than sufficient
for normal cases, slow/overloaded buildfarm machines can get a timeout
failure here, as per recent report from "sungazer". Increase to 180
seconds, which is in line with our typical timeouts elsewhere in
the regression tests.
Back-patch to 9.6; the code looks different in 9.5, and this doesn't
seem to be quite worth the effort to adapt to that.
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2020-08-04%2007%3A12%3A22
Checking whether a certificate's Subject Alternative Names (SANs) match
the hostname being connected to leaked memory after each lookup.
This has been broken since acd08d7, which added support for SANs in SSL
certificates, so back-patch down to 9.5.
Author: Roman Peshkurov
Reviewed-by: Hamid Akhtar, Michael Paquier, David Steele
Discussion: https://postgr.es/m/CALLDf-pZ-E3mjxd5=bnHsDu9zHEOnpgPgdnO84E2RuwMCjjyPw@mail.gmail.com
Backpatch-through: 9.5
We realized years ago that it's better for libpq to accept all
connection parameters syntactically, even if some are ignored or
restricted due to lack of the feature in a particular build.
However, that lesson from the SSL support was for some reason never
applied to the GSSAPI support. This is causing various buildfarm
members to have problems with a test case added by commit 6136e94dc,
and it's just a bad idea from a user-experience standpoint anyway,
so fix it.
While at it, fix some places where parameter-related infrastructure
was added with the aid of a dartboard, or perhaps with the aid of
the anti-pattern "add new stuff at the end". It should be safe
to rearrange the contents of struct pg_conn even in released
branches, since that's private to libpq (and we'd have to move
some fields in some builds to fix this, anyway).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us
When using %b or %B patterns to format a date, the code was simply using
tm_mon as an index into the array of month names. But that is wrong,
because tm_mon is 1-based, while array indexes are 0-based. The result is
that we either use the name of the following month, or get a segfault
(for December).
Fix by subtracting 1 from tm_mon for both patterns, and add a regression
test triggering the issue. Back-patch to all supported versions (the bug
has been there far longer, since at least 2003).
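A self-contained illustration of the off-by-one (month names and a 1-based month number, as in this code path):

    #include <stdio.h>

    static const char *const months[] = {
        "January", "February", "March", "April", "May", "June",
        "July", "August", "September", "October", "November", "December"
    };

    int
    main(void)
    {
        int         tm_mon = 12;    /* 1-based month number, as in this code */

        /* correct: subtract 1 to get a 0-based array index */
        printf("%s\n", months[tm_mon - 1]);

        /* months[tm_mon] would read past the array's end for December */
        return 0;
    }
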
Reported-by: Paul Spencer
Backpatch-through: 9.4
Discussion: https://postgr.es/m/16143-0d861eb8688d3fef%40postgresql.org
Declaring this in the client-visible header ecpglib.h was a pretty
poor decision. It's not meant to be application-callable (and if
it was, putting it outside the extern "C" { ... } wrapper means
that C++ clients would fail to call it). And the declaration would
not even compile for a client, anyway, since it would not have the
macro pg_attribute_format_arg(). Fortunately, it seems that no
clients have tried to include this header with ENABLE_NLS defined,
or we'd have gotten complaints about that. But we have no business
putting such a restriction on client code.
Move the declaration to ecpglib_extern.h, since in fact nothing
outside src/interfaces/ecpg/ecpglib/ needs to call it.
The practical effect of this is just that clients can now safely
#include ecpglib.h while having ENABLE_NLS defined, but that seems
like enough of a reason to back-patch it.
Discussion: https://postgr.es/m/20590.1573069709@sss.pgh.pa.us
libpq failed to ignore Windows-style newlines in connection service files.
This normally wasn't a problem on Windows itself, because fgets() would
convert \r\n to just \n. But if libpq were running inside a program that
changes the default fopen mode to binary, it would see the \r's and think
they were data. In any case, it's project policy to ignore \r in text
files unconditionally, because people sometimes try to use files with
DOS-style newlines on Unix machines, where the C library won't hide that
from us.
Hence, adjust parseServiceFile() to ignore \r as well as \n at the end of
the line. In HEAD, go a little further and make it ignore all trailing
whitespace, to match what it's always done with leading whitespace.
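The chomping pattern being standardized on, as a standalone illustration (not the exact parseServiceFile() code):

    #include <stdio.h>
    #include <string.h>

    static void
    chomp(char *line)
    {
        size_t      len = strlen(line);

        /* strip trailing \n and \r (DOS-style newlines) */
        while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r'))
            line[--len] = '\0';
    }

    int
    main(void)
    {
        char        buf[] = "host=localhost\r\n";

        chomp(buf);
        printf("[%s]\n", buf);      /* prints [host=localhost] */
        return 0;
    }
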
In HEAD, also run around and fix up everyplace where we have
newline-chomping code to make all those places look consistent and
uniformly drop \r. It is not clear whether any of those changes are
fixing live bugs. Most of the non-cosmetic changes are in places that
are reading popen output, and the jury is still out as to whether popen
on Windows can return \r\n. (The Windows-specific code in pipe_read_line
seems to think so, but our lack of support for this elsewhere suggests
maybe it's not a problem in practice.) Hence, I desisted from applying
those changes to back branches, except in run_ssl_passphrase_command()
which is new enough and little-tested enough that we'd probably not have
heard about any problems there.
Tom Lane and Michael Paquier, per bug #15827 from Jorge Gustavo Rocha.
Back-patch the parseServiceFile() change to all supported branches,
and the run_ssl_passphrase_command() change to v11 where that was added.
Discussion: https://postgr.es/m/15827-e6ba53a3a7ed543c@postgresql.org
When a client connects to a rogue server that sends specifically crafted
messages, this can be enough to execute arbitrary code under the
operating-system account used by the client.
While at it, fix error handling when decoding an incorrect salt included
in the first message received from the server.
Author: Michael Paquier
Reviewed-by: Jonathan Katz, Heikki Linnakangas
Security: CVE-2019-10164
Backpatch-through: 10
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database. Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones(). This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases. (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter. More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)
If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.
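A simplified sketch of the symlink-based identification (error handling and the localtime() cross-check omitted; not the actual initdb code):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        char        link_target[1024];
        ssize_t     len;
        const char *zonedir = "/usr/share/zoneinfo/";
        char       *pos;

        len = readlink("/etc/localtime", link_target, sizeof(link_target) - 1);
        if (len < 0)
            return 1;               /* not a symlink: fall back to brute force */
        link_target[len] = '\0';

        pos = strstr(link_target, zonedir);
        if (pos == NULL)
            return 1;               /* unrecognized target: fall back */
        printf("system time zone appears to be \"%s\"\n", pos + strlen(zonedir));
        return 0;
    }
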
Also, tweak initdb so that it prints the zone name it selected.
In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.
Back-patch of commit 23bd3cec6. The original intention was to not
back-patch, since this can result in cosmetic behavioral changes ---
for example, on my own workstation initdb now chooses "America/New_York",
where it used to prefer "US/Eastern" which is equivalent and shorter.
However, our hand has been more or less forced by tzdb update 2019a,
which made the "UCT" zone fully equivalent to "UTC". Our old code
now prefers "UCT" on the grounds of it being alphabetically first,
and that's making nobody happy. Choosing the alias indicated by
/etc/localtime is a more defensible behavior. (Users who don't like
the results can always force the decision by setting the TZ environment
variable before running initdb.)
Patch by me, per a suggestion from Robert Haas; review by Michael Paquier
Discussion: https://postgr.es/m/7408.1525812528@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190604085735.GD24018@msg.df7cb.de
pgtls_read_pending is declared to return bool, but what the underlying
SSL_pending function returns is a count of available bytes.
This is actually somewhat harmless if we're using C99 bools, but in
the back branches it's a live bug: if the available-bytes count happened
to be a multiple of 256, it would get converted to a zero char value.
On machines where char is signed, counts of 128 and up could misbehave
as well. The net effect is that when using SSL, libpq might block
waiting for data even though some has already been received.
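The fix boils down to converting the byte count to a real boolean, along these lines (sketch; not the actual pgtls_read_pending code):

    #include <stdbool.h>
    #include <openssl/ssl.h>

    bool
    tls_read_pending(SSL *ssl)
    {
        /* SSL_pending() returns a byte count; reduce it to a genuine
         * boolean instead of letting it be narrowed to char/bool. */
        return SSL_pending(ssl) > 0;
    }
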
Broken by careless refactoring in commit 4e86f1b16, so back-patch
to 9.5 where that came in.
Per bug #15802 from David Binderman.
Discussion: https://postgr.es/m/15802-f0911a97f0346526@postgresql.org
The Bison documentation clearly states that a semicolon is required
after every grammar rule, and our scripts that generate ecpg's
grammar from the backend's implicitly assumed this is true. But it
turns out that only ancient versions of Bison actually enforce that.
There have been a couple of rules without trailing semicolons in
gram.y for some time, and as a consequence, ecpg's grammar was faulty
and produced wrong output for the affected statements.
To fix, add the missing semis, and add some cross-checks to ecpg's
scripts so that they'll bleat if we mess this up again.
The cases that were broken were:
* "SET variable = DEFAULT" (but not "SET variable TO DEFAULT"),
as well as allied syntaxes such as ALTER SYSTEM SET ... DEFAULT.
These produced syntactically invalid output that the server
would reject.
* Multiple type names in DROP TYPE/DOMAIN commands. Only the
first type name would be listed in the emitted command.
Per report from Daisuke Higuchi. Back-patch to all supported versions.
Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905DB51CE@g01jpexmbkw24
A report from Andrew Dunstan showed that an ecpglib breakage that
causes repeated query failures could lead to infinite loops in some
ecpg test scripts, because they contain "while(1)" loops with no
exit condition other than successful test completion. That might
be all right for manual testing, but it seems entirely unacceptable
for automated test environments such as our buildfarm. We don't
want buildfarm owners to have to intervene manually when a test
goes wrong.
To fix, just change all those while(1) loops to exit after at most
100 iterations (which is more than any of them expect to iterate).
This seems sufficient since we'd see discrepancies in the test output
if any loop executed the wrong number of times.
I tested this by dint of intentionally breaking ecpg_do_prologue
to always fail, and verifying that the tests still got to completion.
Back-patch to all supported branches, since the whole point of this
exercise is to protect the buildfarm against future mistakes.
Discussion: https://postgr.es/m/18693.1548302004@sss.pgh.pa.us
Apparently, some builds of MinGW contain a version of
_configthreadlocale() that always returns -1, indicating failure.
Rather than treating that as a curl-up-and-die condition, soldier on
as though the function didn't exist. This leaves us without thread
safety on such MinGW versions, but we didn't have it anyway.
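A Windows-only sketch of the defensive call (simplified; not ecpglib's actual code):

    #include <locale.h>

    /* Returns the previous per-thread-locale mode, or -1 if unsupported. */
    int
    enable_per_thread_locale(void)
    {
        int         oldctl = _configthreadlocale(_ENABLE_PER_THREAD_LOCALE);

        if (oldctl == -1)
        {
            /* Some MinGW builds always fail here; treat the facility as
             * absent and fall back to process-wide setlocale() handling,
             * rather than reporting a hard error. */
            return -1;
        }
        return oldctl;              /* caller restores this mode when done */
    }
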
Discussion: https://postgr.es/m/d06a16bc-52d6-9f0d-2379-21242d7dbe81@2ndQuadrant.com
ecpglib attempts to force the LC_NUMERIC locale to "C" while reading
server output, to avoid problems with strtod() and related functions.
Historically it's just issued setlocale() calls to do that, but that
has major problems if we're in a threaded application. setlocale()
itself is not required by POSIX to be thread-safe (and indeed is not,
on recent OpenBSD). Moreover, its effects are process-wide, so that
we could cause unexpected results in other threads, or another thread
could change our setting.
On platforms having uselocale(), which is required by POSIX:2008,
we can avoid these problems by using uselocale() instead. Windows
goes its own way as usual, but we can make it safe by using
_configthreadlocale(). Platforms having neither continue to use the
old code, but that should be pretty much nobody among current systems.
(Subsequent buildfarm results show that recent NetBSD versions still
lack uselocale(), but it's not a big problem because they also do not
support non-"C" settings for LC_NUMERIC.)
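A standalone sketch of the uselocale() approach on POSIX:2008 systems (simplified; the real code manages the locale object's lifetime differently):

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        locale_t    clocale = newlocale(LC_NUMERIC_MASK, "C", (locale_t) 0);
        locale_t    oldloc;
        double      val;

        if (clocale == (locale_t) 0)
            return 1;

        oldloc = uselocale(clocale);    /* affects only this thread */
        val = strtod("3.14", NULL);     /* '.' is the decimal point here */
        uselocale(oldloc);              /* restore the caller's locale */
        freelocale(clocale);

        printf("%g\n", val);
        return 0;
    }
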
Back-patch of commits 8eb4a9312 and ee27584c4.
Michael Meskes and Tom Lane; thanks also to Takayuki Tsunakawa.
Discussion: https://postgr.es/m/31420.1547783697@sss.pgh.pa.us
In commit ecfd55795, I removed sqlda.c's checks for ndigits != 0 on the
grounds that we should duplicate the state of the numeric value's digit
buffer even when all the digits are zeroes. However, that still isn't
quite right, because another possible state of the digit buffer is
buf == digits == NULL (this occurs for a NaN). As the code now stands,
it'll invoke memcpy with a NULL source address and zero bytecount,
which we know a few platforms crash on. Hence, reinstate the no-copy
short-circuit, but make it test specifically for buf != NULL rather than
some other condition. In hindsight, the ndigits test (added by commit
f2ae9f9c3) was almost certainly meant to fix the NaN case not the
all-zeroes case as the associated thread alleged.
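The guarded copy, in self-contained toy form (the real struct lives in ecpg's pgtypeslib and carries more fields):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        int         ndigits;        /* number of digit bytes */
        signed char *buf;           /* digit buffer, or NULL (e.g. for NaN) */
    } toy_numeric;

    static int
    copy_digits(toy_numeric *dst, const toy_numeric *src)
    {
        dst->ndigits = src->ndigits;
        dst->buf = NULL;

        if (src->buf != NULL)       /* test the buffer itself, not ndigits */
        {
            size_t      size = (size_t) src->ndigits;

            dst->buf = malloc(size > 0 ? size : 1);
            if (dst->buf == NULL)
                return -1;
            memcpy(dst->buf, src->buf, size);   /* safe: source is non-NULL */
        }
        return 0;
    }

    int
    main(void)
    {
        toy_numeric nan_value = {0, NULL};
        toy_numeric copy;

        if (copy_digits(&copy, &nan_value) == 0 && copy.buf == NULL)
            printf("NaN-style value copied without touching a NULL buffer\n");
        free(copy.buf);
        return 0;
    }
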
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24
Numeric values with leading zeroes were incorrectly copied into a
SQLDA (SQL Descriptor Area), leading to wrong results in ECPG programs.
Report and patch by Daisuke Higuchi. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
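The recommended client pattern, as a minimal libpq program (connection string and channel name are placeholders; compile with -lpq):

    #include <stdio.h>
    #include <unistd.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=mydb");
        PGnotify   *notify;
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        res = PQexec(conn, "LISTEN my_channel");
        PQclear(res);

        for (;;)
        {
            /* absorb any data that has arrived on the socket ... */
            if (!PQconsumeInput(conn))
                break;
            /* ... then drain all notifications already parsed */
            while ((notify = PQnotifies(conn)) != NULL)
            {
                printf("notify: %s payload: %s\n",
                       notify->relname, notify->extra);
                PQfreemem(notify);
                PQconsumeInput(conn);   /* per the fix: before each call */
            }
            sleep(1);                   /* polling for brevity; real code
                                         * should wait on PQsocket(conn) */
        }
        PQfinish(conn);
        return 0;
    }
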
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
When we removed the ecpg-specific versions, we also removed the
"(PostgreSQL)" from the --version output, which we show in other
programs.
Reported-by: Ioseph Kim <pgsql-kr@postgresql.kr>
Ensure that pg_saslprep() initializes its output argument to NULL in
all failure paths, and then remove the redundant initialization that
some (not all) of its callers did. This does not fix any live bug,
but it reduces the odds of future bugs of omission.
Also add a comment about why the existing failure-path coding is
adequate.
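The convention, sketched with an invented stand-in for pg_saslprep() (simplified; not the real src/common/saslprep.c code):

    #include <stdlib.h>
    #include <string.h>

    typedef enum
    {
        MY_SASLPREP_SUCCESS = 0,
        MY_SASLPREP_OOM,
        MY_SASLPREP_INVALID_UTF8
    } my_saslprep_rc;

    static my_saslprep_rc
    my_saslprep(const char *input, char **output)
    {
        *output = NULL;             /* defined result on every failure path */

        if (input == NULL)
            return MY_SASLPREP_INVALID_UTF8;

        *output = strdup(input);    /* stand-in for the real normalization */
        if (*output == NULL)
            return MY_SASLPREP_OOM;
        return MY_SASLPREP_SUCCESS;
    }

    int
    main(void)
    {
        char       *prepped;

        return (my_saslprep("password", &prepped) == MY_SASLPREP_SUCCESS) ? 0 : 1;
    }
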
Back-patch so as to keep the function's API consistent across branches,
again to forestall future bug introduction.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/16558.1536407783@sss.pgh.pa.us
Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators)
and 865f14a2d (which added support for the standard => notation for
named arguments) created a class of lexer tokens which look like
multi-character operators but which have their own token IDs distinct
from Op. However, longest-match rules meant that following any of
these tokens with another operator character, as in (1<>-1), would
cause them to be incorrectly returned as Op.
The error here isn't immediately obvious, because the parser would
usually still find the correct operator via the Op token, but there
were more subtle problems:
1. If immediately followed by a comment or +-, >= <= <> would be given
the old precedence of Op rather than the correct new precedence;
2. If followed by a comment, != would be returned as Op rather than as
NOT_EQUAL, causing it not to be found at all;
3. If followed by a comment or +-, the => token for named arguments
would be lexed as Op, causing the argument to be mis-parsed as a
simple expression, usually causing an error.
Fix by explicitly checking for the operators in the {operator} code
block in addition to all the existing special cases there.
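A self-contained illustration of the re-check (token names mirror the backend grammar's; the real change is in scan.l's {operator} action):

    #include <stdio.h>
    #include <string.h>

    enum token {Op, LESS_EQUALS, GREATER_EQUALS, NOT_EQUALS, EQUALS_GREATER};

    static enum token
    classify(const char *text, int nchars)
    {
        if (nchars == 2)
        {
            if (strncmp(text, "=>", 2) == 0) return EQUALS_GREATER;
            if (strncmp(text, ">=", 2) == 0) return GREATER_EQUALS;
            if (strncmp(text, "<=", 2) == 0) return LESS_EQUALS;
            if (strncmp(text, "<>", 2) == 0) return NOT_EQUALS;
            if (strncmp(text, "!=", 2) == 0) return NOT_EQUALS;
        }
        return Op;
    }

    int
    main(void)
    {
        /* "<>-" initially lexes as one operator; after the trailing '-'
         * is stripped, the remaining "<>" must be re-checked explicitly. */
        printf("%d\n", classify("<>", 2) == NOT_EQUALS);
        return 0;
    }
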
Backpatch to 9.5 where the problem was introduced.
Analysis and patch by me; review by Tom Lane.
Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
The lexer's handling of operators contained an O(N^3) hazard when
dealing with long strings of + or - characters; it seems hard to
prevent this case from being O(N^2), but the additional N multiplier
was not needed.
Backpatch all the way since this has been there since 7.x, and it
presents at least a mild hazard in that trying to do Bind, PREPARE or
EXPLAIN on a hostile query could take excessive time (without
honouring cancels or timeouts) even if the query was never executed.
Historically, we looked up the target hostname in connectDBStart, so that
PQconnectPoll did not need to do DNS name resolution. The patches that
added multiple-target-host support to libpq preserved this division of
labor; but it's really nonsensical now, because it means that if any one
of the target hosts fails to resolve in DNS, the connection fails. That
negates the no-single-point-of-failure goal of the feature. Additionally,
DNS lookups aren't exactly cheap, but the code did them all even if the
first connection attempt succeeds.
Hence, rearrange so that PQconnectPoll does the lookups, and only looks
up a hostname when it's time to try that host. This does mean that
PQconnectPoll could block on a DNS lookup --- but if you wanted to avoid
that, you should be using hostaddr, as the documentation has always
specified. It seems fairly unlikely that any applications would really
care whether the lookup occurs inside PQconnectStart or PQconnectPoll.
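For example, an application that wants to rule out any DNS lookup inside PQconnectPoll() can pass hostaddr along with host (addresses and names below are placeholders; compile with -lpq):

    #include <libpq-fe.h>

    int
    main(void)
    {
        /* with hostaddr given, libpq skips DNS resolution entirely */
        PGconn     *conn = PQconnectdb(
            "host=db1.example.com,db2.example.com "
            "hostaddr=192.0.2.10,192.0.2.11 dbname=mydb");
        int         ok = (PQstatus(conn) == CONNECTION_OK);

        PQfinish(conn);
        return ok ? 0 : 1;
    }
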
In addition to calling out that fact explicitly, do some other minor
wordsmithing in the docs around the multiple-target-host feature.
Since this seems like a bug in the multiple-target-host feature,
backpatch to v10 where that was introduced. In the back branches,
avoid moving any existing fields of struct pg_conn, just in case
any third-party code is looking into that struct.
Tom Lane, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/4913.1533827102@sss.pgh.pa.us