pg_regress reported "Unix socket" as the default location whenever
HAVE_UNIX_SOCKETS is defined. However, that's not been accurate
on Windows since 8f3ec75de. Update this logic to match what libpq
actually does now.
This is just cosmetic, but it's still potentially misleading.
Back-patch to v13 where 8f3ec75de came in.
Discussion: https://postgr.es/m/3894060.1646415641@sss.pgh.pa.us
PQcancel() is supposed to be safe to call from a signal handler,
and indeed psql uses it that way. All of the library functions
it uses are specified to be async-signal-safe by POSIX ...
except for strerror. Neither plain strerror nor strerror_r
are considered safe. When this code was written, back in the
dark ages, we probably figured "oh, strerror will just index
into a constant array of strings" ... but in any locale except C,
that's unlikely to be true. Probably the reason we've not heard
complaints is that (a) this error-handling code is unlikely to be
reached in normal use, and (b) in many scenarios, localized error
strings would already have been loaded, after which maybe it's
safe to call strerror here. Still, this is clearly unacceptable.
The best we can do without relying on strerror is to print the
decimal value of errno, so make it do that instead. (This is
probably not much loss of user-friendliness, given that it is
hard to get a failure here.)
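As a concrete illustration of the constraint, errno can be rendered in
decimal with nothing but arithmetic and byte stores, all of which are
async-signal-safe; a hypothetical helper (not the exact code used)
might look like this:

    #include <stddef.h>

    /*
     * Sketch: format a (positive) errno value as decimal text using only
     * async-signal-safe operations -- no strerror, no locale, no malloc.
     */
    static void
    format_errno_decimal(int save_errno, char *buf, size_t bufsize)
    {
        char        digits[16];
        char       *p = digits + sizeof(digits);
        size_t      i = 0;

        do
        {
            *--p = '0' + (save_errno % 10);
            save_errno /= 10;
        } while (save_errno > 0);

        while (i < bufsize - 1 && p < digits + sizeof(digits))
            buf[i++] = *p++;
        buf[i] = '\0';
    }

The result can then be copied into the caller-supplied error buffer
with equally safe byte operations.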
Back-patch to all supported branches.
Discussion: https://postgr.es/m/2937814.1641960929@sss.pgh.pa.us
libpq collects up to a bufferload of data whenever it reads data from
the socket. When SSL or GSS encryption is requested during startup,
any additional data received with the server's yes-or-no reply
remained in the buffer, and would be treated as already-decrypted data
once the encryption handshake completed. Thus, a man-in-the-middle
with the ability to inject data into the TCP connection could stuff
some cleartext data into the start of a supposedly encryption-protected
database session.
This could probably be abused to inject faked responses to the
client's first few queries, although other details of libpq's behavior
make that harder than it sounds. A different line of attack is to
exfiltrate the client's password, or other sensitive data that might
be sent early in the session. That has been shown to be possible with
a server vulnerable to CVE-2021-23214.
To fix, throw a protocol-violation error if the internal buffer
is not empty after the encryption handshake.
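In sketch form, the check amounts to this (illustrative function; the
real test lives in libpq's connection state machine, and inStart/inEnd
mirror PGconn's input-buffer fields):

    #include <stdbool.h>
    #include "pqexpbuffer.h"

    /*
     * After the server's "yes" to SSL/GSS, any bytes already buffered were
     * received before encryption started and must be rejected rather than
     * fed to the decrypted-data path.
     */
    static bool
    no_leftover_plaintext(const char *inStart, const char *inEnd,
                          PQExpBuffer errorMessage)
    {
        if (inStart != inEnd)
        {
            appendPQExpBufferStr(errorMessage,
                                 "received unencrypted data after encryption response\n");
            return false;       /* caller drops the connection */
        }
        return true;
    }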
Our thanks to Jacob Champion for reporting this problem.
Security: CVE-2021-23222
Commits ffa2e4670 and 52a10224e caused libpq's connection-establishment
functions to usually leave a nonempty string in the connection's
errorMessage buffer, even after a successful connection. While that
was intentional on my part, more sober reflection says that it wasn't
a great idea: the leftover string could be confusing. Also, this broke
at least one application that checked for connection success by
examining the errorMessage instead of using PQstatus() as documented.
Let's
clear the buffer at success exit, restoring the pre-v14 behavior.
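For reference, the documented way for an application to check for
connection success is PQstatus(), not errorMessage; for example:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* conninfo string is illustrative */
        PGconn     *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }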
Discussion: https://postgr.es/m/4170264.1620321747@sss.pgh.pa.us
Commit ffa2e4670 changed libpq so that multiple error reports
occurring during one operation (a connection attempt or query)
are accumulated in conn->errorMessage, where before new ones
usually replaced any prior error. At least in theory, that makes
us more vulnerable to running out of memory for the errorMessage
buffer. If it did happen, the user would be left with just an
empty-string error report, which is pretty unhelpful.
We can improve this by relying on pqexpbuffer.c's existing "broken
buffer" convention to track whether we've hit OOM for the current
operation's error string, and then substituting a constant "out of
memory" string in the small number of places where the errorMessage
is read out.
While at it, apply the same method to similar OOM cases in
pqInternalNotice and pqGetErrorNotice3.
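The read-out pattern, in sketch form (PQExpBufferBroken is the existing
pqexpbuffer.h test for the OOM state; the helper name is illustrative):

    #include "pqexpbuffer.h"

    /*
     * When handing the accumulated error text to the caller, substitute a
     * constant string if the buffer ever went into its "broken" (OOM) state.
     */
    static const char *
    error_text_or_oom(PQExpBuffer errorMessage)
    {
        if (PQExpBufferBroken(errorMessage))
            return "out of memory\n";
        return errorMessage->data;
    }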
Back-patch to v14 where ffa2e4670 came in. In principle this could
go back further; but in view of the lack of field reports, the
hazard seems negligible in older branches.
Discussion: https://postgr.es/m/530153.1627425648@sss.pgh.pa.us
Commit c0cb87fbb unwisely introduced a dependency on the StringInfo
machinery in fe-connect.c. We must not use that in libpq, because
it will do a summary exit(1) if it hits OOM, and that is not
appropriate behavior for a general-purpose library. The goal of
allowing arbitrary line lengths in service files doesn't seem like
it's worth a lot of effort, so revert to the previous method
of using a stack-allocated buffer and failing on buffer overflow.
This isn't an exact revert though. I kept that patch's refactoring
to have a single exit path, as that seems cleaner than having each
error path know what to do to clean up. Also, I made the fixed-size
buffer 1024 bytes not 256, just to push off the need for an expandable
buffer some more.
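A sketch of the approach, under the assumption of a fgets-based reader
(the helper is hypothetical; the real code lives in fe-connect.c):

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    /*
     * Read one service-file line into a fixed-size buffer; report overflow
     * as an ordinary error instead of exiting on OOM as StringInfo would.
     */
    static bool
    read_service_line(FILE *f, char *buf, size_t bufsize, bool *got_eof)
    {
        *got_eof = false;
        if (fgets(buf, (int) bufsize, f) == NULL)
        {
            *got_eof = true;
            return true;
        }
        if (strlen(buf) >= bufsize - 1 && buf[bufsize - 2] != '\n')
            return false;       /* line too long; caller reports and bails out */
        return true;
    }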
There is more to do here; in particular the lack of any mechanical
check for this type of mistake now seems pretty hazardous. But this
fix gets us back to the level of robustness we had in v13, anyway.
Discussion: https://postgr.es/m/daeb22ec6ca8ef61e94d766a9b35fb03cabed38e.camel@vmware.com
Instead, put them in via a format placeholder. This reduces the
number of distinct translatable messages and also reduces the chances
of typos during translation. We already did this for the system call
arguments in a number of cases, so this is just the same thing taken a
bit further.
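Illustrative fragment of the pattern (the message text, "reason", and
the surrounding call site are made up for the example):

    /* Instead of a distinct translatable string per function name ... */
    appendPQExpBuffer(&conn->errorMessage,
                      libpq_gettext("%s() failed: %s\n"),
                      "SSL_CTX_new", reason);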
Discussion: https://www.postgresql.org/message-id/flat/92d6f545-5102-65d8-3c87-489f71ea0a37%40enterprisedb.com
By default, have libpq set the TLS extension "Server Name Indication" (SNI).
This allows an SNI-aware SSL proxy to route connections. (This
requires a proxy that is aware of the PostgreSQL protocol, not just
any SSL proxy.)
In the future, this could also allow the server to use different SSL
certificates for different host specifications. (That would require
new server functionality. This would be the client-side functionality
for that.)
Since SNI makes the host name appear in cleartext in the network
traffic, this might be undesirable in some cases. Therefore, also add
a libpq connection option "sslsni" to turn it off.
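A minimal sketch of the client-side mechanics, assuming OpenSSL (the
helper and its gating on the sslsni option are simplified):

    #include <openssl/ssl.h>

    /*
     * Request SNI with the host name being connected to, unless the user
     * disabled it with sslsni=0.
     */
    static int
    set_sni_if_enabled(SSL *ssl, const char *host, int sslsni_enabled)
    {
        if (!sslsni_enabled || host == NULL || host[0] == '\0')
            return 1;           /* nothing to do */
        return (int) SSL_set_tlsext_host_name(ssl, host);  /* 1 on success */
    }

Applications that need to avoid the cleartext host name can disable it
in the connection string, e.g. "sslsni=0".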
Discussion: https://www.postgresql.org/message-id/flat/7289d5eb-62a5-a732-c3b9-438cee2cb709%40enterprisedb.com
Failing to do this can cause a crash, and I suspect that is what has
happened with a buildfarm member reporting mysterious failures.
This is an ancient bug, but I'm not backpatching since evidently nobody
cares about PQtrace in older releases.
Discussion: https://postgr.es/m/3333908.1617227066@sss.pgh.pa.us
Transform the PQtrace output format from its ancient (and mostly
useless) byte-level output format to a logical-message-level output,
making it much more usable. This implementation allows the printing
code to be written (as it indeed was) by looking at the protocol
documentation, which gives more confidence that the three (docs, trace
code and actual code) actually match.
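Usage is unchanged; only the output format differs. For example
(illustrative path):

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    enable_trace(PGconn *conn, const char *path)
    {
        FILE       *trace = fopen(path, "w");

        /* messages are now printed per protocol message, not per byte */
        if (trace != NULL)
            PQtrace(conn, trace);
    }

Call PQuntrace(conn) and close the stream when done.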
Author: 岩田 彩 (Aya Iwata) <iwata.aya@fujitsu.com>
Reviewed-by: 綱川 貴之 (Takayuki Tsunakawa) <tsunakawa.takay@fujitsu.com>
Reviewed-by: Kirk Jamison <k.jamison@fujitsu.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: 黒田 隼人 (Hayato Kuroda) <kuroda.hayato@fujitsu.com>
Reviewed-by: "Nagaura, Ryohei" <nagaura.ryohei@jp.fujitsu.com>
Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>
Reviewed-by: Jim Doty <jdoty@pivotal.io>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/71E660EB361DF14299875B198D4CE5423DE3FBA4@g01jpexmbkw25
Pipeline mode in libpq lets an application avoid the Sync messages in
the FE/BE protocol that are implicit in the old libpq API after each
query. The application can then insert Sync at its leisure with a new
libpq function PQpipelineSync. This can lead to substantial reductions
in query latency.
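A minimal usage sketch of the new API (error handling elided; conn is
an established PGconn):

    #include <libpq-fe.h>

    static void
    pipeline_example(PGconn *conn)
    {
        PQenterPipelineMode(conn);
        PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
        PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0);
        PQpipelineSync(conn);   /* one Sync message covers both queries */
        /* consume results with PQgetResult(), including the
         * PGRES_PIPELINE_SYNC marker, then leave pipeline mode */
        PQexitPipelineMode(conn);
    }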
Co-authored-by: Craig Ringer <craig.ringer@enterprisedb.com>
Co-authored-by: Matthieu Garrigues <matthieu.garrigues@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Aya Iwata <iwata.aya@jp.fujitsu.com>
Reviewed-by: Daniel Vérité <daniel@manitou-mail.org>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kirk Jamison <k.jamison@fujitsu.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Nikhil Sontakke <nikhils@2ndquadrant.com>
Reviewed-by: Vaishnavi Prabakaran <VaishnaviP@fast.au.fujitsu.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CAMsr+YFUjJytRyV4J-16bEoiZyH=4nj+sQ7JP9ajwz=B4dMMZw@mail.gmail.com
Discussion: https://postgr.es/m/CAJkzx4T5E-2cQe3dtv2R78dYFvz+in8PY7A8MArvLhs_pg75gg@mail.gmail.com
Based on an analysis of the OpenSSL code with Jacob, moving to EVP for
the cryptohash computations makes it necessary to set up the libcrypto
callbacks, which were previously set only for SSL connections, not for
connections without SSL. Not setting the callbacks makes the use of
threads potentially unsafe for connections calling cryptohashes during
authentication, like MD5 or SCRAM, if a failure happens during a
cryptohash computation. The logic setting the libssl and libcrypto
states is then split into two parts, both using the same locking, with
libcrypto being set up for SSL and non-SSL connections, while SSL
connections set any libssl state afterwards as needed.
Prior to this commit, only SSL connections would have set libcrypto
callbacks that are necessary to ensure a proper thread locking when
using multiple concurrent threads in libpq (ENABLE_THREAD_SAFETY). Note
that this is only required for OpenSSL 1.0.2 and 1.0.1 (oldest version
supported on HEAD), as 1.1.0 has its own internal locking and it has
dropped support for CRYPTO_set_locking_callback().
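For context, the libcrypto side of that setup on OpenSSL 1.0.x looks
roughly like this (simplified sketch; the real code is in
fe-secure-openssl.c):

    #include <stdlib.h>
    #include <pthread.h>
    #include <openssl/crypto.h>

    static pthread_mutex_t *crypto_locks;

    static void
    locking_cb(int mode, int n, const char *file, int line)
    {
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&crypto_locks[n]);
        else
            pthread_mutex_unlock(&crypto_locks[n]);
    }

    /*
     * Must now run for every connection attempt, SSL or not, so that
     * cryptohash calls during authentication are thread-safe.
     */
    static void
    init_libcrypto_locking(void)
    {
        int         i,
                    nlocks = CRYPTO_num_locks();

        crypto_locks = malloc(sizeof(pthread_mutex_t) * nlocks);
        if (crypto_locks == NULL)
            return;             /* real code reports the failure */
        for (i = 0; i < nlocks; i++)
            pthread_mutex_init(&crypto_locks[i], NULL);
        CRYPTO_set_locking_callback(locking_cb);
    }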
Tests with up to 300 threads with OpenSSL 1.0.1 and 1.0.2, mixing SSL
and non-SSL connection threads did not show any performance impact after
some micro-benchmarking. pgbench can be used here with -C and a
mostly-empty script (with one \set meta-command for example) to stress
authentication requests, and we have mixed that with some custom
programs for testing.
Reported-by: Jacob Champion
Author: Michael Paquier
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com
This partially reverts 096bbf7 and 9d2d457, undoing the libpq changes,
as they could break distributions that share a single libpq version
across multiple major versions of Postgres for extensions and
applications linking to it.
Note that the backend is unchanged here, and it still disables SSL
compression while simplifying the underlying catalogs that tracked if
compression was enabled or not for a SSL connection.
Per discussion with Tom Lane and Daniel Gustafsson.
Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
The authtype parameter was deprecated and made inactive in commit
d5bbe2aca55bc8, but the environment variable was left defined and thus
tested with a getenv call even though the value is of no use. Also,
if it did exist, its value would be copied but never freed, as the
cleanup code had been removed.
tty was deprecated in commit cb7fb3ca958ec8bd5a14e7 but most of the
infrastructure around it remained in place.
Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/DDDF36F3-582A-4C02-8598-9B464CC42B34@yesql.se
Per buildfarm member crake, any installation with a postgres_fdw server
that had this option set would fail to pg_upgrade properly, because the
option became a hidden debug option in f9264d1, so restoring the FDW
server failed.
This changes back the option in libpq to be visible, but still inactive
to fix this upgrade issue.
Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
PostgreSQL disabled compression as of e3bdb2d and the documentation
recommends against using it since. Additionally, SSL compression has
been disabled in OpenSSL since version 1.1.0, and was disabled in many
distributions long before that. The most recent TLS version, TLSv1.3,
disallows compression at the protocol level.
This commit removes the feature itself, removing support for the libpq
parameter sslcompression (parameter still listed for compatibility
reasons with existing connection strings, just ignored), and removes
the equivalent field in pg_stat_ssl and de facto PgBackendSSLStatus.
Note that, on top of removing the ability to activate compression by
configuration, compression is actively disabled in both frontend and
backend to avoid overrides from local configurations.
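On the OpenSSL side, the active disabling boils down to a context
option; a one-line sketch (ssl_ctx stands for the SSL_CTX in use):

    /* refuse TLS compression regardless of local OpenSSL configuration */
    SSL_CTX_set_options(ssl_ctx, SSL_OP_NO_COMPRESSION);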
A TAP test is added for the deprecated SSL parameters to check
backwards compatibility.
Bump catalog version.
Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
Discussion: https://postgr.es/m/7E384D48-11C5-441B-9EC3-F7DB1F8518F6@yesql.se
Protocol version 3 was introduced in PostgreSQL 7.4. There shouldn't be
many clients or servers left out there without version 3 support. But as
a courtesy, I kept just enough of the old protocol support that we can
still send the "unsupported protocol version" error in v2 format, so that
old clients can display the message properly. Likewise, libpq still
understands v2 ErrorResponse messages when establishing a connection.
The impetus to do this now is that I'm working on a patch to COPY
FROM, to always prefetch some data. We cannot do that safely with the
old protocol, because it requires parsing the input one byte at a time
to detect the end-of-copy marker.
Reviewed-by: Tom Lane, Alvaro Herrera, John Naylor
Discussion: https://www.postgresql.org/message-id/9ec25819-0a8a-d51a-17dc-4150bb3cca3b%40iki.fi
In addition to the existing options of "any" and "read-write", we
now support "read-only", "primary", "standby", and "prefer-standby".
"read-write" retains its previous meaning of "transactions are
read-write by default", and "read-only" inverts that. The other
three modes test specifically for hot-standby status, which is not
quite the same thing. (Setting default_transaction_read_only on
a primary server renders it read-only to this logic, but not a
standby.)
Furthermore, if talking to a v14 or later server, no extra network
round trip is needed to detect the session's status; the GUC_REPORT
variables delivered by the server are enough. When talking to an
older server, a SHOW or SELECT query is issued to detect session
read-only-ness or server hot-standby state, as needed.
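Example usage (host names are illustrative):

    #include <libpq-fe.h>

    static PGconn *
    connect_prefer_standby(void)
    {
        /* prefer a hot standby, but fall back to any reachable server */
        return PQconnectdb("host=pg1.example.com,pg2.example.com "
                           "target_session_attrs=prefer-standby");
    }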
Haribabu Kommi, Greg Nancarrow, Vignesh C, Tom Lane; reviewed at
various times by Laurenz Albe, Takayuki Tsunakawa, Peter Smith.
Discussion: https://postgr.es/m/CAF3+xM+8-ztOkaV9gHiJ3wfgENTq97QcjXQt+rbFQ6F7oNzt9A@mail.gmail.com
Add another method to specify CRLs, hashed directory method, for both
server and client side. This offers a means for server or libpq to
load only CRLs that are required to verify a certificate. The CRL
directory is specified by separate GUC variables or connection options
ssl_crl_dir and sslcrldir, alongside the existing ssl_crl_file and
sslcrl, so both methods can be used at the same time.
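Client-side example (host and path are illustrative; the directory is
expected to contain hash-named CRL files, as produced by OpenSSL's
rehash tooling):

    #include <libpq-fe.h>

    static PGconn *
    connect_with_crl_dir(void)
    {
        return PQconnectdb("host=db.example.com sslmode=verify-full "
                           "sslcrldir=/etc/postgresql/crl");
    }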
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/20200731.173911.904649928639357911.horikyota.ntt@gmail.com
"connection to server so-and-so failed:" seems clearer than the
previous wording "could not connect to so-and-so:" (introduced by
52a10224e), because the latter suggests a network-level connection
failure. We're now prefixing this string to all types of connection
failures, for instance authentication failures; so we need wording
that doesn't imply a low-level error.
Per discussion with Robert Haas.
Discussion: https://postgr.es/m/CA+TgmobssJ6rS22dspWnu-oDxXevGmhMD8VcRBjmj-b9UDqRjw@mail.gmail.com
If a server returns ERRCODE_CANNOT_CONNECT_NOW, try the next host,
if multiple host names have been provided. This allows dealing
gracefully with standby servers that might not be in hot standby mode
yet.
In the wake of the preceding commit, it might be plausible to retry
many more error cases than we do now, but I (tgl) am hesitant to
move too aggressively on that --- it's not clear it'd be desirable
for cases such as bad-password, for example. But this case seems
safe enough.
Hubert Zhang, reviewed by Takayuki Tsunakawa
Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
Prefix "could not connect to host-or-socket-path:" to all connection
failure cases that occur after the socket() call, and remove the
ad-hoc server identity data that was appended to a few of these
messages. This should produce much more intelligible error reports
in multiple-target-host situations, especially for error cases that
are off the beaten track to any degree (because none of those provided
any server identity info).
As an example of the change, formerly a connection attempt with a bad
port number such as "psql -p 12345 -h localhost,/tmp" might produce
    psql: error: could not connect to server: Connection refused
            Is the server running on host "localhost" (::1) and accepting
            TCP/IP connections on port 12345?
    could not connect to server: Connection refused
            Is the server running on host "localhost" (127.0.0.1) and accepting
            TCP/IP connections on port 12345?
    could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.12345"?
Now it looks like
    psql: error: could not connect to host "localhost" (::1), port 12345: Connection refused
            Is the server running on that host and accepting TCP/IP connections?
    could not connect to host "localhost" (127.0.0.1), port 12345: Connection refused
            Is the server running on that host and accepting TCP/IP connections?
    could not connect to socket "/tmp/.s.PGSQL.12345": No such file or directory
            Is the server running locally and accepting connections on that socket?
This requires adjusting a couple of regression tests to allow for
variation in the contents of a connection failure message.
Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
Previously, we had an undisciplined mish-mash of printfPQExpBuffer and
appendPQExpBuffer calls to report errors within libpq. This commit
establishes a uniform rule that appendPQExpBuffer[Str] should be used.
conn->errorMessage is reset only at the start of an application request,
and then accumulates messages till we're done. We can remove no less
than three different ad-hoc mechanisms that were used to get the effect
of concatenation of error messages within a sequence of operations.
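The convention, in fragment form (conn, host and reason stand for
whatever is in scope at the call site):

    /* at the start of each application request: start from a clean slate */
    resetPQExpBuffer(&conn->errorMessage);

    /* wherever an error is noticed during that request: append, never
     * overwrite, so earlier context survives */
    appendPQExpBuffer(&conn->errorMessage,
                      libpq_gettext("could not connect to host \"%s\": %s\n"),
                      host, reason);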
Although this makes things quite a bit cleaner conceptually, the main
reason to do it is to make the world safer for the multiple-target-host
feature that was added awhile back. Previously, there were many cases
in which an error occurring during an individual host connection attempt
would wipe out the record of what had happened during previous attempts.
(The reporting is still inadequate, in that it can be hard to tell which
host got the failure, but that seems like a matter for a separate commit.)
Currently, lo_import and lo_export contain exceptions to the "never
use printfPQExpBuffer" rule. If we changed them, we'd risk reporting
an incidental lo_close failure before the actual read or write
failure, which would be confusing, not least because lo_close happened
after the main failure. We could improve this by inventing an
internal version of lo_close that doesn't reset the errorMessage; but
we'd also need a version of PQfn() that does that, and it didn't quite
seem worth the trouble for now.
Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
The critical issue fixed here is that if a GSSAPI-encrypted connection
is successfully made, pqsecure_open_gss() cleared conn->allow_ssl_try,
as an admittedly-hacky way of preventing us from then trying to tunnel
SSL encryption over the already-encrypted connection. The problem
with that is that if we abandon the GSSAPI connection because of a
failure during authentication, we would not attempt SSL encryption
in the next try with the same server. This can lead to unexpected
connection failure, or silently getting a non-encrypted connection
where an encrypted one is expected.
Fortunately, we'd only manage to make a GSSAPI-encrypted connection
if both client and server hold valid tickets in the same Kerberos
infrastructure, which is a relatively uncommon environment.
Nonetheless this is a very nasty bug with potential security
consequences. To fix, don't reset the flag, instead adding a
check for conn->gssenc being already true when deciding whether
to try to initiate SSL.
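In sketch form, the decision about starting SSL now reads roughly like
this (simplified; field names follow libpq-int.h):

    /*
     * Try SSL only if it's allowed, not already in use, and the connection
     * is not already GSSAPI-encrypted; allow_ssl_try is no longer cleared
     * as a side effect of GSS success.
     */
    if (conn->allow_ssl_try && !conn->wait_ssl_try &&
        !conn->ssl_in_use && !conn->gssenc)
    {
        /* initiate SSL negotiation */
    }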
While here, fix some lesser issues in libpq's GSSAPI code:
* Use the need_new_connection stanza when dropping an attempted
GSSAPI connection, instead of partially duplicating that code.
The consequences of this are pretty minor: AFAICS it could only
lead to auth_req_received or password_needed remaining set when
they shouldn't, which is not too harmful.
* Fix pg_GSS_error() to not repeat the "mprefix" it's given multiple
times, and to notice any failure return from gss_display_status().
* Avoid gratuitous dependency on NI_MAXHOST in
pg_GSS_load_servicename().
Per report from Mikael Gustavsson. Back-patch to v12 where
this code was introduced.
Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
This is a variant of normal Unix-domain sockets that doesn't use the
file system but a separate "abstract" namespace. At the user
interface, such sockets are represented by names starting with "@".
Supported on Linux and Windows right now.
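Client-side example (socket name is illustrative):

    #include <libpq-fe.h>

    static PGconn *
    connect_abstract_socket(void)
    {
        /* a leading "@" selects the abstract namespace */
        return PQconnectdb("host=@my-postgres-socket port=5432");
    }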
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/6dee8574-b0ad-fc49-9c8c-2edc796f0033@2ndquadrant.com
According to Microsoft's documentation, 2.2 has been the current
version since Windows 98 or so. Moreover, that's what the Postgres
backend has been requesting since 2004 (cf commit 4cdf51e64).
So there seems no reason for libpq to keep asking for 1.1.
Bring thread_test along, too, so that we're uniformly asking for 2.2
in all our WSAStartup calls.
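That is, the calls now uniformly look like this sketch:

    #include <winsock2.h>

    static int
    startup_winsock(void)
    {
        WSADATA     wsadata;

        /* request Winsock 2.2, matching the backend since 4cdf51e64 */
        return WSAStartup(MAKEWORD(2, 2), &wsadata);    /* 0 on success */
    }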
It's not clear whether there's any point in back-patching this,
so for now I didn't.
Discussion: https://postgr.es/m/132799.1602960277@sss.pgh.pa.us
The Windows documentation insists that every WSAStartup call should
have a matching WSACleanup call. However, if that ever had actual
relevance, it wasn't in this century. Every remotely-modern Windows
kernel is capable of cleaning up when a process exits without doing
that, and must be so to avoid resource leaks in case of a process
crash. Moreover, Postgres backends have done WSAStartup without
WSACleanup since commit 4cdf51e64 in 2004, and we've never seen any
indication of a problem with that.
libpq's habit of doing WSAStartup during connection start and
WSACleanup during shutdown is also rather inefficient, since a
series of non-overlapping connection requests leads to repeated,
quite expensive DLL unload/reload cycles. We document a workaround
for that (having the application call WSAStartup for itself), but
that's just a kluge. It's also worth noting that it's far from
uncommon for applications to exit without doing PQfinish, and
we've not heard reports of trouble from that either.
However, the real reason for acting on this is that recent
experiments by Alexander Lakhin suggest that calling WSACleanup
during PQfinish might be triggering the symptom we occasionally see
that a process using libpq fails to emit expected stdio output.
Therefore, let's change libpq so that it calls WSAStartup only
once per process, during the first connection attempt, and never
calls WSACleanup at all.
While at it, get rid of the only other WSACleanup call in our code
tree, in pg_dump/parallel.c; that presumably is equally useless.
If this proves to suppress the fairly-common ecpg test failures
we see on Windows, I'll back-patch, but for now let's just do it
in HEAD and see what happens.
Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru
Use a StringInfo instead of a fixed-size buffer in parseServiceInfo().
While we've not heard complaints about the existing 255-byte limit,
it certainly seems possible that complex cases could run afoul of it.
Daniel Gustafsson
Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
Historically there's been a hard-wired assumption here that no line of
a .pgpass file could be as long as NAMEDATALEN*5 bytes. That's a bit
shaky to start off with, because (a) there's no reason to suppose that
host names fit in NAMEDATALEN, and (b) this figure fails to allow for
backslash escape characters. However, it fails completely if someone
wants to use a very long password, and we're now hearing reports of
people wanting to use "security tokens" that can run up to several
hundred bytes. Another angle is that the file is specified to allow
comment lines, but there's no reason to assume that long comment lines
aren't possible.
Rather than guessing at what might be a more suitable limit, let's
replace the fixed-size buffer with an expansible PQExpBuffer. That
adds one malloc/free cycle to the typical use-case, but that's surely
pretty cheap relative to the I/O this code has to do.
Also, add TAP test cases to exercise this code, because there was no
test coverage before.
This reverts most of commit 2eb3bc588, as there's no longer a need for
a warning message about overlength .pgpass lines. (I kept the explicit
check for comment lines, though.)
In HEAD and v13, this also fixes an oversight in 74a308cf5: there's not
much point in explicit_bzero'ing the line buffer if we only do so in two
of the three exit paths.
Back-patch to all supported branches, except that the test case only
goes back to v10 where src/test/authentication/ was added.
Discussion: https://postgr.es/m/4187382.1598909041@sss.pgh.pa.us
GSS-related resources should be cleaned up in pqDropConnection,
not freePGconn, else the wrong things happen when resetting
a connection or trying to switch to a different server.
It's also critical to reset conn->gssenc there.
During connection setup, initialize conn->try_gss at the correct
place, else switching to a different server won't work right.
Remove now-redundant cleanup of GSS resources around one (and, for
some reason, only one) pqDropConnection call in connectDBStart.
Per report from Kyotaro Horiguchi that psql would freeze up,
rather than successfully resetting a GSS-encrypted connection
after a server restart.
This is YA oversight in commit b0b39f72b, so back-patch to v12.
Discussion: https://postgr.es/m/20200710.173803.435804731896516388.horikyota.ntt@gmail.com
When we initially created this parameter, in commit ff8ca5fad, we left
the default as "allow any protocol version" on grounds of backwards
compatibility. However, that's inconsistent with the backend's default
since b1abfec82; protocol versions prior to 1.2 are not considered very
secure; and OpenSSL has had TLSv1.2 support since 2012, so the number
of PG servers that need a lesser minimum is probably quite small.
On top of those things, it emerges that some popular distros (including
Debian and RHEL) set MinProtocol=TLSv1.2 in openssl.cnf. Thus, far
from having "allow any protocol version" behavior in practice, what
we actually have as things stand is a platform-dependent lower limit.
So, change our minds and set the min version to TLSv1.2. Anybody
wanting to connect with a new libpq to a pre-2012 server can either
set ssl_min_protocol_version=TLSv1 or accept the fallback to non-SSL.
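For instance, to reach such an old server with a new libpq (host name
illustrative):

    #include <libpq-fe.h>

    static PGconn *
    connect_old_tls_server(void)
    {
        return PQconnectdb("host=legacy.example.com sslmode=require "
                           "ssl_min_protocol_version=TLSv1");
    }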
Back-patch to v13 where the aforementioned patches appeared.
Patch by me, reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/a9408304-4381-a5af-d259-e55d349ae4ce@2ndquadrant.com
Since 74a308c, any security-sensitive information gets cleared from
memory this way. This was forgotten in 4dc6355.
Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/935443BA-D42E-4CE0-B181-1AD79E6DD45A@yesql.se
The libpq parameters ssl{max|min}protocolversion are renamed to use
underscores, to become ssl_{max|min}_protocol_version. The related
environment variables still use the names introduced in commit ff8ca5f
that added the feature.
Per complaint from Peter Eisentraut (this was also mentioned by me in
the original patch review but the issue got discarded).
Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/b319e449-318d-e691-4997-1327e166fcc4@2ndquadrant.com
This commit fixes the following two issues with the .pgpass file.
(1) If a line in the .pgpass file was longer than 319 bytes, libpq
silently treated each 319-byte chunk of the line as a separate
setting line.
(2) The documentation explains that a line beginning with # is
treated as a comment in .pgpass, but there was no code doing such
special handling. Whether or not a line began with #, libpq just
checked whether its first token matched the host.
For (1), this commit makes libpq warn if a line is longer than
319 bytes and throw away the part beyond that limit.
For (2), this commit changes libpq so that it treats any lines
beginning with # as comments.
Author: Fujii Masao
Reviewed-by: Hamid Akhtar
Discussion: https://postgr.es/m/c0f0c01c-fa74-9749-2084-b73882fd5465@oss.nttdata.com
The error exits added to initialize_SSL() failed to clean up the
partially-built SSL_context, and some of them also leaked the
result of SSLerrmessage(). Make them match other error-handling
cases in that function.
The error exits added to connectOptions2() failed to set conn->status
like every other error exit in that function.
In passing, make the SSL_get_peer_certificate() error exit look more
like all the other calls of SSLerrmessage().
Oversights in commit ff8ca5fad. Coverity whined about leakage of the
SSLerrmessage() results; I noted the rest in manual code review.
We have code paths for Unix socket support and no Unix socket support.
Now add a third variant: Unix socket support but do not use a Unix
socket by default in the client or the server, only if you explicitly
specify one. This will be useful when we enable Unix socket support
on Windows.
To implement this, tweak things so that setting DEFAULT_PGSOCKET_DIR
to "" has the desired effect. This mostly already worked like that;
only a few places needed to be adjusted. Notably, the reference to
DEFAULT_PGSOCKET_DIR in UNIXSOCK_PATH() could be removed because all
callers already resolve an empty socket directory setting with a
default if appropriate.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/75f72249-8ae6-322a-63df-4fe03eeccb9f@2ndquadrant.com
We used to strategically place newlines after some function call left
parentheses to make pgindent move the argument list a few chars to the
left, so that the whole line would fit under 80 chars. However,
pgindent no longer does that, so the newlines just made the code
vertically longer for no reason. Remove those newlines, and reflow some
of those lines for some extra naturality.
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20200129200401.GA6303@alvherre.pgsql
channel_binding's longest allowed value is not "7", it is actually "8".
gssencmode also got that wrong.
A similar mistake has been fixed as of f4051e3.
Backpatch down to v12, where gssencmode has been introduced.
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200128053633.GD1552@paquier.xyz
Backpatch-through: 12
These two new parameters, named sslminprotocolversion and
sslmaxprotocolversion, control respectively the minimum and the
maximum version of the SSL protocol used for the SSL connection attempt.
The default setting is to allow any version for both the minimum and the
maximum bounds, causing libpq to rely on the bounds set by the backend
when negotiating the protocol to use for an SSL connection. The bounds
are checked when the values are set at the earliest stage possible as
this makes the checks independent of any SSL implementation.
Author: Daniel Gustafsson
Reviewed-by: Michael Paquier, Cary Huang
Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
Fix assorted bugs in handling of non-blocking I/O when using GSSAPI
encryption. The encryption layer could return the wrong status
information to its caller, resulting in effectively dropping some data
(or possibly in aborting a not-broken connection), or in a "livelock"
situation where data remains to be sent but the upper layers think
transmission is done and just go to sleep. There were multiple small
thinkos contributing to that, as well as one big one (failure to think
through what to do when a send fails after having already transmitted
data). Note that these errors could cause failures whether the client
application asked for non-blocking I/O or not, since both libpq and
the backend always run things in non-block mode at this level.
Also get rid of use of static variables for GSSAPI inside libpq;
that's entirely not okay given that multiple connections could be
open at once inside a single client process.
Also adjust a bunch of random small discrepancies between the frontend
and backend versions of the send/receive functions -- except for error
handling, they should be identical, and now they are.
Also extend the Kerberos TAP tests to exercise cases where nontrivial
amounts of data need to be pushed through encryption. Before, those
tests didn't provide any useful coverage at all for the cases of
interest here. (They still might not, depending on timing, but at
least there's a chance.)
Per complaint from pmc@citylink and subsequent investigation.
Back-patch to v12 where this code was introduced.
Discussion: https://postgr.es/m/20200109181822.GA74698@gate.oper.dinoex.org