Causing a core dump on out-of-memory seems pretty unfriendly,
and surely is far outside the expected behavior of a general-purpose
library. Just print an error message (as we did already) and return.
These functions unfortunately don't have an error return convention,
but code using them is probably just looking for a quick-n-dirty
print method and wouldn't bother to check anyway.
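For illustration, the adjusted failure path amounts to roughly this (a sketch, not the committed diff; the exact message text differs per call site):

    if (buf == NULL)
    {
        fprintf(stderr, libpq_gettext("out of memory\n"));
        return;                 /* previously this path went on to abort() */
    }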
Although these functions are semi-deprecated, it still seems
appropriate to back-patch this. In passing, also back-patch
b90e6cef1, just to reduce cosmetic differences between the
branches.
Discussion: https://postgr.es/m/3122443.1624735363@sss.pgh.pa.us
Our uses of gss_display_status() and gss_display_name() assumed
that the gss_buffer_desc strings returned by those functions are
null-terminated. It appears that they generally are, given the
lack of field complaints up to now. However, the available
documentation does not promise this, and some man pages
for gss_display_status() show examples that rely on the
gss_buffer_desc.length field instead of expecting null
termination. Also, we now have a report that on some
implementations, clang's address sanitizer is of the opinion
that the byte after the specified length is undefined.
Hence, change the code to rely on the length field instead.
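For example, a length-respecting way to emit such a message looks roughly like this (a sketch only; the buffer and variable names are illustrative, not the exact libpq code):

    gss_buffer_desc gmsg;
    OM_uint32       lmin_s,
                    msg_ctx = 0;

    gss_display_status(&lmin_s, maj_stat, GSS_C_GSS_CODE,
                       GSS_C_NO_OID, &msg_ctx, &gmsg);
    /* do not assume gmsg.value is NUL-terminated; honor gmsg.length */
    appendPQExpBuffer(errbuf, "%s: %.*s\n", mprefix,
                      (int) gmsg.length, (char *) gmsg.value);
    gss_release_buffer(&lmin_s, &gmsg);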
This might well be cosmetic rather than fixing any real bug, but
it's hard to be sure, so back-patch to all supported branches.
While here, also back-patch the v12 changes that made pg_GSS_error
deal honestly with multiple messages available from
gss_display_status.
Per report from Sudheer H R.
Discussion: https://postgr.es/m/5372B6D4-8276-42C0-B8FB-BD0918826FC3@tekenlight.com
An incorrectly-encoded multibyte character near the end of a string
could cause various processing loops to run past the string's
terminating NUL, with results ranging from no detectable issue to
a program crash, depending on what happens to be in the following
memory.
This isn't an issue in the server, because we take care to verify
the encoding of strings before doing any interesting processing
on them. However, that lack of care leaked into client-side code
which shouldn't assume that anyone has validated the encoding of
its input.
Although this is certainly a bug worth fixing, the PG security team
elected not to regard it as a security issue, primarily because
any untrusted text should be sanitized by PQescapeLiteral or
the like before being incorporated into a SQL or psql command.
(If an app fails to do so, the same technique can be used to
cause SQL injection, with probably much more dire consequences
than a mere client-program crash.) Those functions were already
made proof against this class of problem, cf CVE-2006-2313.
To fix, invent PQmblenBounded() which is like PQmblen() except it
won't return more than the number of bytes remaining in the string.
In HEAD we can make this a new libpq function, as PQmblen() is.
It seems imprudent to change libpq's API in stable branches though,
so in the back branches define PQmblenBounded as a macro in the files
that need it. (Note that just changing PQmblen's behavior would not
be a good idea; notably, it would completely break the escaping
functions' defense against this exact problem. So we just want a
version for those callers that don't have any better way of handling
this issue.)
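In the back branches, the essential trick is just to clamp PQmblen's answer to the number of bytes actually remaining, roughly (sketch; see the patch for the HEAD function version):

    /* like PQmblen, but never report more bytes than remain in the string */
    #define PQmblenBounded(s, e)  strnlen(s, PQmblen(s, e))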
Per private report from houjingyi. Back-patch to all supported branches.
Commit 92785dac2 copied some logic related to advancement of inStart
from pqParseInput3 into getRowDescriptions and getAnotherTuple,
because it wanted to allow user-defined row processor callbacks to
potentially longjmp out of the library, and inStart would have to be
updated before that happened to avoid an infinite loop. We later
decided that that API was impossibly fragile and reverted it, but
we didn't undo all of the related code changes, and this bit of
messiness survived. Undo it now so that there's just one place in
pqParseInput3's processing where inStart is advanced; this will
simplify addition of better tracing support.
getParamDescriptions had grown similar processing somewhere along
the way (not in 92785dac2; I didn't track down just when), but it's
actually buggy because its handling of corrupt-message cases seems to
have been copied from the v2 logic where we lacked a known message
length. The cases where we "goto not_enough_data" should not simply
return EOF, because then we won't consume the message, potentially
creating an infinite loop. That situation now represents a
definitively corrupt message, and we should report it as such.
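Schematically, the rule being enforced is (an illustrative sketch, not the exact patch; the label and message text are approximate):

    if (pqGetInt(&nparams, 2, conn))
    {
        /*
         * pqParseInput3 has already verified that the whole message is in
         * the buffer, so a short read here means the message itself is
         * malformed.  Consume it and report an error rather than returning
         * EOF and re-examining the same bytes forever.
         */
        errmsg = libpq_gettext("insufficient data in \"t\" message");
        goto advance_and_error;
    }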
Although no field reports of getParamDescriptions getting stuck in
a loop have been seen, it seems appropriate to back-patch that fix.
I chose to back-patch all of this to keep the logic looking more alike
in supported branches.
Discussion: https://postgr.es/m/2217283.1615411989@sss.pgh.pa.us
We had "short *mdy" in the extern declarations, but "short mdy[3]"
in the actual function definitions. Per C99 these are equivalent,
but recent versions of gcc have started to issue warnings about
the inconsistency. Clean it up before the warnings get any more
widespread.
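The shape of the inconsistency is (illustrative only, not the exact ecpg prototypes):

    extern void set_mdy(short *mdy);        /* extern declaration */

    void
    set_mdy(short mdy[3])                   /* definition: equivalent per C99,
                                             * but newer gcc warns about it */
    {
        mdy[0] = mdy[1] = mdy[2] = 0;
    }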
Back-patch, in case anyone wants to build older PG versions with
bleeding-edge compilers.
Discussion: https://postgr.es/m/2401575.1611764534@sss.pgh.pa.us
The critical issue fixed here is that if a GSSAPI-encrypted connection
is successfully made, pqsecure_open_gss() cleared conn->allow_ssl_try,
as an admittedly-hacky way of preventing us from then trying to tunnel
SSL encryption over the already-encrypted connection. The problem
with that is that if we abandon the GSSAPI connection because of a
failure during authentication, we would not attempt SSL encryption
in the next try with the same server. This can lead to unexpected
connection failure, or silently getting a non-encrypted connection
where an encrypted one is expected.
Fortunately, we'd only manage to make a GSSAPI-encrypted connection
if both client and server hold valid tickets in the same Kerberos
infrastructure, which is a relatively uncommon environment.
Nonetheless this is a very nasty bug with potential security
consequences. To fix, don't reset the flag, instead adding a
check for conn->gssenc being already true when deciding whether
to try to initiate SSL.
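In outline, the decision point now looks something like this (a sketch; the real test involves additional conditions):

    /* consider starting SSL only if GSSAPI encryption is not already active */
    if (conn->allow_ssl_try && !conn->ssl_in_use
    #ifdef ENABLE_GSS
        && !conn->gssenc
    #endif
        )
    {
        /* ... initiate SSL negotiation ... */
    }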
While here, fix some lesser issues in libpq's GSSAPI code:
* Use the need_new_connection stanza when dropping an attempted
GSSAPI connection, instead of partially duplicating that code.
The consequences of this are pretty minor: AFAICS it could only
lead to auth_req_received or password_needed remaining set when
they shouldn't, which is not too harmful.
* Fix pg_GSS_error() to not repeat the "mprefix" it's given multiple
times, and to notice any failure return from gss_display_status().
* Avoid gratuitous dependency on NI_MAXHOST in
pg_GSS_load_servicename().
Per report from Mikael Gustavsson. Back-patch to v12 where
this code was introduced.
Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
We must not set the "done" flag until after we've executed the
initialization function. Otherwise, other threads can fall through
the initial unlocked test before initialization is really complete.
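The required ordering is, in essence (a minimal sketch of the pattern, not the ecpglib code itself; run_initialization is a placeholder):

    static volatile bool    done = false;
    static pthread_mutex_t  init_mutex = PTHREAD_MUTEX_INITIALIZER;

    if (!done)                          /* unlocked fast-path test */
    {
        pthread_mutex_lock(&init_mutex);
        if (!done)
        {
            run_initialization();       /* do the work first ... */
            done = true;                /* ... then publish the flag */
        }
        pthread_mutex_unlock(&init_mutex);
    }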
This has been seen to cause rare failures of ecpg's thread/descriptor
test, and it could presumably cause other sorts of misbehavior in
threaded ECPG-using applications, since ecpglib relies on
pthread_once() in several places.
Diagnosis and patch by me, based on investigation by Alexander Lakhin.
Back-patch to all supported branches (the bug dates to 2007).
Discussion: https://postgr.es/m/16685-d6cd241872c101d3@postgresql.org
The Windows documentation insists that every WSAStartup call should
have a matching WSACleanup call. However, if that ever had actual
relevance, it wasn't in this century. Every remotely-modern Windows
kernel is capable of cleaning up when a process exits without doing
that, and must be so to avoid resource leaks in case of a process
crash. Moreover, Postgres backends have done WSAStartup without
WSACleanup since commit 4cdf51e64 in 2004, and we've never seen any
indication of a problem with that.
libpq's habit of doing WSAStartup during connection start and
WSACleanup during shutdown is also rather inefficient, since a
series of non-overlapping connection requests leads to repeated,
quite expensive DLL unload/reload cycles. We document a workaround
for that (having the application call WSAStartup for itself), but
that's just a kluge. It's also worth noting that it's far from
uncommon for applications to exit without doing PQfinish, and
we've not heard reports of trouble from that either.
However, the real reason for acting on this is that recent
experiments by Alexander Lakhin show that calling WSACleanup
during PQfinish is triggering the symptom we occasionally see
that a process using libpq fails to emit expected stdio output.
Therefore, let's change libpq so that it calls WSAStartup only
once per process, during the first connection attempt, and never
calls WSACleanup at all.
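Schematically, the new arrangement is (a sketch assuming a simple once-only flag; the helper name is made up):

    static bool wsastartup_done = false;

    static int
    ensure_winsock(void)
    {
        WSADATA     wsadata;

        if (!wsastartup_done)
        {
            if (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0)
                return -1;
            wsastartup_done = true;
        }
        return 0;
        /* note: WSACleanup() is intentionally never called */
    }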
While at it, get rid of the only other WSACleanup call in our code
tree, in pg_dump/parallel.c; that presumably is equally useless.
Back-patch of HEAD commit 7d00a6b2d.
Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru
Historically there's been a hard-wired assumption here that no line of
a .pgpass file could be as long as NAMEDATALEN*5 bytes. That's a bit
shaky to start off with, because (a) there's no reason to suppose that
host names fit in NAMEDATALEN, and (b) this figure fails to allow for
backslash escape characters. However, it fails completely if someone
wants to use a very long password, and we're now hearing reports of
people wanting to use "security tokens" that can run up to several
hundred bytes. Another angle is that the file is specified to allow
comment lines, but there's no reason to assume that long comment lines
aren't possible.
Rather than guessing at what might be a more suitable limit, let's
replace the fixed-size buffer with an expansible PQExpBuffer. That
adds one malloc/free cycle to the typical use-case, but that's surely
pretty cheap relative to the I/O this code has to do.
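The reading loop becomes, roughly (a sketch using the PQExpBuffer API; error handling and field parsing elided):

    PQExpBufferData buf;
    char            tmp[256];

    initPQExpBuffer(&buf);
    while (fgets(tmp, sizeof(tmp), fp) != NULL)
    {
        appendPQExpBufferStr(&buf, tmp);
        if (strchr(tmp, '\n') != NULL)
            break;                      /* we now have the whole line */
    }
    /* ... match hostname/port/database/user against buf.data ... */
    explicit_bzero(buf.data, buf.len);  /* wipe password material on every path */
    termPQExpBuffer(&buf);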
Also, add TAP test cases to exercise this code, because there was no
test coverage before.
This reverts most of commit 2eb3bc588, as there's no longer a need for
a warning message about overlength .pgpass lines. (I kept the explicit
check for comment lines, though.)
In HEAD and v13, this also fixes an oversight in 74a308cf5: there's not
much point in explicit_bzero'ing the line buffer if we only do so in two
of the three exit paths.
Back-patch to all supported branches, except that the test case only
goes back to v10 where src/test/authentication/ was added.
Discussion: https://postgr.es/m/4187382.1598909041@sss.pgh.pa.us
A couple of test cases had connect_timeout=14, a value that seems
to have been plucked from a hat. While it's more than sufficient
for normal cases, slow/overloaded buildfarm machines can get a timeout
failure here, as per recent report from "sungazer". Increase to 180
seconds, which is in line with our typical timeouts elsewhere in
the regression tests.
Back-patch to 9.6; the code looks different in 9.5, and this doesn't
seem to be quite worth the effort to adapt to that.
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2020-08-04%2007%3A12%3A22
Some code paths dedicated to bytea were using the varchar structure instead.
This did not lead to any actual bugs, as bytea and varchar currently have the
same definition, but it could become a trap if one of these definitions
changes for a new feature or a bug fix.
Issue introduced by 050710b.
Author: Shenhao Wang
Reviewed-by: Vignesh C, Michael Paquier
Discussion: https://postgr.es/m/07ac7dee1efc44f99d7f53a074420177@G08CNEXMBPEKD06.g08.fujitsu.local
Backpatch-through: 12
GSS-related resources should be cleaned up in pqDropConnection,
not freePGconn, else the wrong things happen when resetting
a connection or trying to switch to a different server.
It's also critical to reset conn->gssenc there.
During connection setup, initialize conn->try_gss at the correct
place, else switching to a different server won't work right.
Remove now-redundant cleanup of GSS resources around one (and, for
some reason, only one) pqDropConnection call in connectDBStart.
Per report from Kyotaro Horiguchi that psql would freeze up,
rather than successfully resetting a GSS-encrypted connection
after a server restart.
This is YA oversight in commit b0b39f72b, so back-patch to v12.
Discussion: https://postgr.es/m/20200710.173803.435804731896516388.horikyota.ntt@gmail.com
Even when we've concluded that we have a hard write failure on the
socket, we should continue to try to read data. This gives us an
opportunity to collect any final error message that the backend might
have sent before closing the connection; moreover it is the job of
pqReadData not pqSendSome to close the socket once EOF is detected.
Due to an oversight in 1f39a1c06, pqSendSome failed to try to collect
data in the case where we'd already set write_failed. The problem was
masked for ordinary query operations (which really only make one write
attempt anyway), but COPY to the server would continue to send data
indefinitely after a mid-COPY connection loss.
Hence, add pqReadData calls into the paths where pqSendSome drops data
because of write_failed. If we've lost the connection, this will
eventually result in closing the socket and setting CONNECTION_BAD,
which will cause PQputline and siblings to report failure, allowing
the application to terminate the COPY sooner. (Basically this restores
what happened before 1f39a1c06.)
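Conceptually, the added step is (a sketch of the idea, not the exact patch):

    if (conn->write_failed)
    {
        /* discard unsendable output, but still look at incoming data */
        conn->outCount = 0;

        /*
         * pqReadData will notice EOF if the server is gone, close the
         * socket, and set CONNECTION_BAD, so PQputline and friends start
         * failing instead of pumping COPY data forever.
         */
        (void) pqReadData(conn);
        return 0;
    }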
There are related issues that this does not solve; for example, if the
backend sends an error but doesn't drop the connection, we did and
still will keep pumping COPY data as long as the application sends it.
Fixing that will require application-visible behavior changes though,
and anyway it's an ancient behavior that we've had few complaints about.
For now I'm just trying to fix the regression from 1f39a1c06.
Per a complaint from Andres Freund. Back-patch into v12 where
1f39a1c06 came in.
Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
Both the backend and libpq leaked buffers containing encrypted data
to be transmitted, so that the process size would grow roughly as
the total amount of data sent.
There were also far-less-critical leaks of the same sort in GSSAPI
session establishment.
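The leaks are of the usual GSSAPI-buffer kind; the fix is essentially this pattern (a sketch with illustrative variable names):

    gss_buffer_desc output = GSS_C_EMPTY_BUFFER;
    OM_uint32       major,
                    minor;
    int             conf_state;

    major = gss_wrap(&minor, gctx, 1, GSS_C_QOP_DEFAULT,
                     &input, &conf_state, &output);
    if (major == GSS_S_COMPLETE)
    {
        /* ... queue output.value / output.length for transmission ... */
    }
    gss_release_buffer(&minor, &output);    /* previously missing */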
Oversight in commit b0b39f72b, which I failed to notice while
reviewing the code in 2c0cdc818.
Per complaint from pmc@citylink.
Back-patch to v12 where this code was introduced.
Discussion: https://postgr.es/m/20200504115649.GA77072@gate.oper.dinoex.org
The check of whether the Subject Alternative Names (SANs) from a certificate
match the hostname being connected to leaked memory on each lookup.
This has been broken since acd08d7, which added support for SANs in SSL
certificates, so backpatch down to 9.5.
Author: Roman Peshkurov
Reviewed-by: Hamid Akhtar, Michael Paquier, David Steele
Discussion: https://postgr.es/m/CALLDf-pZ-E3mjxd5=bnHsDu9zHEOnpgPgdnO84E2RuwMCjjyPw@mail.gmail.com
Backpatch-through: 9.5
channel_binding's longest allowed value is not "7", it is actually "8".
gssencmode also got that wrong.
A similar mistake has been fixed as of f4051e3.
Backpatch down to v12, where gssencmode has been introduced.
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200128053633.GD1552@paquier.xyz
Backpatch-through: 12
ecpg_build_params() would crash on a null pointer dereference if
realloc() failed, due to updating the persistent "stmt" struct
too aggressively. (Even without the crash, this would've leaked
the old storage that we were trying to realloc.)
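The underlying rule is the usual realloc idiom (illustrative sketch; stmt_buf and newsize are placeholders):

    char   *newbuf = realloc(stmt_buf, newsize);

    if (newbuf == NULL)
    {
        /*
         * Report out-of-memory and bail out; stmt_buf is still valid and
         * still owned by the normal cleanup path, so it is neither leaked
         * nor replaced with a NULL pointer.
         */
        return false;
    }
    stmt_buf = newbuf;          /* update the persistent pointer only on success */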
Per Coverity. This seems to have been broken in commit 0cc050794,
so back-patch into v12.
Fix assorted bugs in handling of non-blocking I/O when using GSSAPI
encryption. The encryption layer could return the wrong status
information to its caller, resulting in effectively dropping some data
(or possibly in aborting a not-broken connection), or in a "livelock"
situation where data remains to be sent but the upper layers think
transmission is done and just go to sleep. There were multiple small
thinkos contributing to that, as well as one big one (failure to think
through what to do when a send fails after having already transmitted
data). Note that these errors could cause failures whether the client
application asked for non-blocking I/O or not, since both libpq and
the backend always run things in non-block mode at this level.
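The biggest of those thinkos is the classic partial-write accounting problem; in outline (a sketch of the rule only, with made-up variable names):

    n = send(sock, outbuf + cursor, outlen - cursor, 0);
    if (n < 0)
    {
        /*
         * If some of the caller's bytes were already accepted into the
         * encryption buffers during this call, report that count; returning
         * a would-block error now would make the caller believe nothing was
         * consumed, leading to data being dropped or sent twice.
         */
        if (bytes_consumed > 0)
            return bytes_consumed;
        return -1;              /* nothing consumed: let the caller retry */
    }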
Also get rid of use of static variables for GSSAPI inside libpq;
that's entirely not okay given that multiple connections could be
open at once inside a single client process.
Also adjust a bunch of random small discrepancies between the frontend
and backend versions of the send/receive functions -- except for error
handling, they should be identical, and now they are.
Also extend the Kerberos TAP tests to exercise cases where nontrivial
amounts of data need to be pushed through encryption. Before, those
tests didn't provide any useful coverage at all for the cases of
interest here. (They still might not, depending on timing, but at
least there's a chance.)
Per complaint from pmc@citylink and subsequent investigation.
Back-patch to v12 where this code was introduced.
Discussion: https://postgr.es/m/20200109181822.GA74698@gate.oper.dinoex.org
We realized years ago that it's better for libpq to accept all
connection parameters syntactically, even if some are ignored or
restricted due to lack of the feature in a particular build.
However, that lesson from the SSL support was for some reason never
applied to the GSSAPI support. This is causing various buildfarm
members to have problems with a test case added by commit 6136e94dc,
and it's just a bad idea from a user-experience standpoint anyway,
so fix it.
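The SSL precedent being followed is, schematically (a sketch; the exact message wording may differ):

    /* always accept the gssencmode keyword syntactically ... */
    if (strcmp(conn->gssencmode, "require") == 0)
    {
    #ifndef ENABLE_GSS
        /* ... and complain only if this build cannot honor the setting */
        appendPQExpBufferStr(&conn->errorMessage,
                             libpq_gettext("gssencmode value \"require\" invalid when GSSAPI support is not compiled in\n"));
        return false;
    #endif
    }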
While at it, fix some places where parameter-related infrastructure
was added with the aid of a dartboard, or perhaps with the aid of
the anti-pattern "add new stuff at the end". It should be safe
to rearrange the contents of struct pg_conn even in released
branches, since that's private to libpq (and we'd have to move
some fields in some builds to fix this, anyway).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us
When using %b or %B patterns to format a date, the code was simply using
tm_mon as an index into the array of month names. But that is wrong, because
tm_mon is 1-based while array indexes are 0-based. The result is that we
either use the name of the next month, or get a segfault (for December).
Fix by subtracting 1 from tm_mon for both patterns, and add a regression
test triggering the issue. Backpatch to all supported versions (the bug
has been there far longer, since at least 2003).
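Illustratively, the adjustment amounts to (a sketch; the array and variable names are not the actual ecpg identifiers):

    case 'b':                   /* abbreviated month name */
        strcpy(target, month_names[tm->tm_mon - 1]);    /* was tm->tm_mon */
        break;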
Reported-by: Paul Spencer
Backpatch-through: 9.4
Discussion: https://postgr.es/m/16143-0d861eb8688d3fef%40postgresql.org
Declaring this in the client-visible header ecpglib.h was a pretty
poor decision. It's not meant to be application-callable (and if
it was, putting it outside the extern "C" { ... } wrapper means
that C++ clients would fail to call it). And the declaration would
not even compile for a client, anyway, since it would not have the
macro pg_attribute_format_arg(). Fortunately, it seems that no
clients have tried to include this header with ENABLE_NLS defined,
or we'd have gotten complaints about that. But we have no business
putting such a restriction on client code.
Move the declaration to ecpglib_extern.h, since in fact nothing
outside src/interfaces/ecpg/ecpglib/ needs to call it.
The practical effect of this is just that clients can now safely
#include ecpglib.h while having ENABLE_NLS defined, but that seems
like enough of a reason to back-patch it.
Discussion: https://postgr.es/m/20590.1573069709@sss.pgh.pa.us
pgtypeslib_extern.h contained fallback definitions of "bool", "FALSE",
and "TRUE". The latter two are just plain unused, and have been for
awhile. The former came into play only if there wasn't a macro
definition of "bool", which is true only if we aren't using <stdbool.h>.
However, it then defined bool as "char"; since commit d26a810eb that
conflicts with c.h's desire to use "unsigned char". We'd missed seeing
any bad effects of that due to accidental header inclusion order choices,
but dddf4cdc3 exposed that it was problematic.
To fix, let's just get rid of these definitions. They should not be
needed because everyplace in Postgres should be relying on c.h to
provide a definition for type bool. (Note that despite its name,
pgtypeslib_extern.h isn't exposed to any outside code; we don't
install it.)
This doesn't fully resolve the issue, because ecpglib.h is doing
similar things, but that seems to require more thought to fix.
Back-patch to v12 where d26a810eb came in, to forestall any unpleasant
surprises from future back-patched bug fixes.
Discussion: https://postgr.es/m/CAA4eK1LmaKO7Du9M9Lo=kxGU8sB6aL8fa3sF6z6d5yYYVe3BuQ@mail.gmail.com
A check was redundant. While at it, add an assertion to make sure that
the parsing routine is never called with a NULL input. All the code
paths currently calling the parsing routine are careful with NULL inputs
already, but future callers may forget that.
Reported-by: Peter Eisentraut, Lars Kanis
Discussion: https://postgr.es/m/ec64956b-4597-56b6-c3db-457d15250fe4@2ndquadrant.com
Backpatch-through: 12
The logic was correctly detecting a parsing failure, but the parsing
error did not get reported back to the client properly.
Reported-by: Ed Morley
Author: Lars Kanis
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/a9b4cbd7-4ecb-06b2-ebd7-1739bbff3217@greiz-reinsdorf.de
Backpatch-through: 12
Commit e7a2217 has introduced stricter checks for integer values in
connection parameters for libpq. However, it did not handle trailing
whitespace correctly, while leading whitespace was already discarded
thanks to the use of strtol(3). This fixes and refactors the parsing logic
to handle both cases consistently. Note that rejecting trailing whitespace
outright would easily break connection strings such as those in the ECPG
regression tests (those tests are what allowed me to catch the parsing bug
with connect_timeout).
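The refactored rule is, in essence (a sketch of the helper's logic; the real code has more error reporting):

    char   *end;
    long    numval;

    errno = 0;
    numval = strtol(value, &end, 10);   /* strtol itself skips leading spaces */
    if (errno != 0 || end == value)
        return false;                   /* not a number at all */
    while (*end != '\0' && isspace((unsigned char) *end))
        end++;                          /* trailing whitespace is now accepted */
    if (*end != '\0')
        return false;                   /* any other trailing junk is an error */
    if (numval != (int) numval)
        return false;                   /* out of range for int */
    *result = (int) numval;
    return true;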
Author: Michael Paquier
Reviewed-by: Lars Kanis
Discussion: https://postgr.es/m/a9b4cbd7-4ecb-06b2-ebd7-1739bbff3217@greiz-reinsdorf.de
Backpatch-through: 12
This reverts commit f7ab80285. Per discussion, we can't remove an
exported symbol without a SONAME bump, which we don't want to do.
In particular that breaks usage of current libpq.so with pre-9.3
versions of psql etc, which need libpq to export pqsignal().
As noted in that commit message, exporting the symbol from libpgport.a
won't work reliably; but actually we don't want to export src/port's
implementation anyway. Any pre-9.3 client is going to be expecting the
definition that pqsignal() had before 9.3, which was that it didn't
set SA_RESTART for SIGALRM. Hence, put back pqsignal() in a separate
source file in src/interfaces/libpq, and give it the old semantics.
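For reference, the restored definition is essentially the classic wrapper (a sketch; the new source file in libpq is authoritative):

    pqsigfunc
    pqsignal(int signo, pqsigfunc func)
    {
        struct sigaction act,
                    oact;

        act.sa_handler = func;
        sigemptyset(&act.sa_mask);
        act.sa_flags = 0;
        if (signo != SIGALRM)
            act.sa_flags |= SA_RESTART; /* the pre-9.3 quirk: no SA_RESTART
                                         * for SIGALRM */
    #ifdef SA_NOCLDSTOP
        if (signo == SIGCHLD)
            act.sa_flags |= SA_NOCLDSTOP;
    #endif
        if (sigaction(signo, &act, &oact) < 0)
            return SIG_ERR;
        return oact.sa_handler;
    }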
Back-patch to v12.
Discussion: https://postgr.es/m/E1g5vmT-0003K1-6S@gemulon.postgresql.org
This reverts commit bd7c95f0c1,
along with assorted follow-on fixes. There are some questions
about the definition and implementation of that statement, and
we don't have time to resolve them before v13 release. Rather
than ship the feature and then have backwards-compatibility
concerns constraining any redesign, let's remove it for now
and try again later.
Discussion: https://postgr.es/m/TY2PR01MB2443EC8286995378AEB7D9F8F5B10@TY2PR01MB2443.jpnprd01.prod.outlook.com
Make the error messages around GSSAPI encryption a bit clearer. Tweak
some messages to avoid plural problems.
Also make a code change for clarity. Using "conf" for "confidential"
is quite confusing. Using "conf_state" is perhaps not much better but
that's what the GSSAPI documentation uses, so there is at least some
hope of understanding it.