This new libpq function allows the application to send an 'H' message,
which instructs the server to flush its outgoing buffer.
This hasn't been needed so far because the Sync message already requests
a buffer flush; and I failed to realize that it would be needed in
pipeline mode, since PQpipelineSync also causes the buffer to be
flushed. However,
sometimes it is useful to request a flush without establishing a
synchronization point.
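For illustration, a minimal sketch of how an application might use the
new call in pipeline mode (connection setup and real error handling
omitted; the query is just a placeholder):

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    send_without_sync(PGconn *conn)
    {
        const char *values[1] = {"42"};

        /* queue a query while in pipeline mode */
        if (PQsendQueryParams(conn, "SELECT $1::int", 1,
                              NULL, values, NULL, NULL, 0) != 1)
            fprintf(stderr, "%s", PQerrorMessage(conn));

        /*
         * Ask the server to flush its outgoing buffer now, without
         * establishing a synchronization point as PQpipelineSync() would.
         */
        if (PQsendFlushRequest(conn) != 1)
            fprintf(stderr, "%s", PQerrorMessage(conn));

        /* push libpq's own output buffer onto the wire as well */
        if (PQflush(conn) == -1)
            fprintf(stderr, "%s", PQerrorMessage(conn));
    }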
Backpatch to 14, where pipeline mode was introduced in libpq.
Reported-by: Boris Kolpackov <boris@codesynthesis.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/202106252350.t76x73nt643j@alvherre.pgsql
Directly exiting or aborting seems like poor form for a general-purpose
library. Now that libpq liberally uses bits out of src/common/,
it's very easy to accidentally include code that would do something
unwanted like calling exit(1) after OOM --- see for example 8ec00dc5c.
Hence, add a simple cross-check that no such calls have made it into
libpq.so.
The cross-check depends on nm(1) being available and being able to
work on a shared library, which probably isn't true everywhere.
But we can just make the test silently do nothing if nm fails.
As long as the check is effective on common platforms, that should
be good enough. (By the same logic, I've not worried about providing
an equivalent test in MSVC builds.)
Discussion: https://postgr.es/m/3128896.1624742969@sss.pgh.pa.us
Doing an abort() seems all right in development builds, but not in
production builds of general-purpose libraries. However, the functions
that were doing this lack any way to report a failure back up to their
callers. It seems like we can just get away with ignoring failures in
production builds, since (a) no such failures have been reported in the
dozen years that the code's been like this, and (b) failure to enforce
mutual exclusion during fe-auth.c operations would likely not cause any
problems anyway in most cases. (The OpenSSL callbacks that use this
macro are obsolete, so even less likely to cause interesting problems.)
Possibly a better answer would be to break compatibility of the
pgthreadlock_t callback API, but in the absence of field problem
reports, it doesn't really seem worth the trouble.
Discussion: https://postgr.es/m/3131385.1624746109@sss.pgh.pa.us
Causing a core dump on out-of-memory seems pretty unfriendly,
and surely is far outside the expected behavior of a general-purpose
library. Just print an error message (as we did already) and return.
These functions unfortunately don't have an error return convention,
but code using them is probably just looking for a quick-n-dirty
print method and wouldn't bother to check anyway.
Although these functions are semi-deprecated, it still seems
appropriate to back-patch this. In passing, also back-patch
b90e6cef1, just to reduce cosmetic differences between the
branches.
Discussion: https://postgr.es/m/3122443.1624735363@sss.pgh.pa.us
Commit c0cb87fbb unwisely introduced a dependency on the StringInfo
machinery in fe-connect.c. We must not use that in libpq, because
it will do a summary exit(1) if it hits OOM, and that is not
appropriate behavior for a general-purpose library. The goal of
allowing arbitrary line lengths in service files doesn't seem like
it's worth a lot of effort, so revert to the previous method of using
a stack-allocated buffer and failing on buffer overflow.
This isn't an exact revert though. I kept that patch's refactoring
to have a single exit path, as that seems cleaner than having each
error path know what to do to clean up. Also, I made the fixed-size
buffer 1024 bytes not 256, just to push off the need for an expandable
buffer some more.
There is more to do here; in particular the lack of any mechanical
check for this type of mistake now seems pretty hazardous. But this
fix gets us back to the level of robustness we had in v13, anyway.
Discussion: https://postgr.es/m/daeb22ec6ca8ef61e94d766a9b35fb03cabed38e.camel@vmware.com
Our uses of gss_display_status() and gss_display_name() assumed
that the gss_buffer_desc strings returned by those functions are
null-terminated. It appears that they generally are, given the
lack of field complaints up to now. However, the available
documentation does not promise this, and some man pages
for gss_display_status() show examples that rely on the
gss_buffer_desc.length field instead of expecting null
termination. Also, we now have a report that on some
implementations, clang's address sanitizer is of the opinion
that the byte after the specified length is undefined.
Hence, change the code to rely on the length field instead.
This might well be cosmetic rather than fixing any real bug, but
it's hard to be sure, so back-patch to all supported branches.
While here, also back-patch the v12 changes that made pg_GSS_error
deal honestly with multiple messages available from
gss_display_status.
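A sketch of the length-based pattern (generic GSSAPI usage, not the
exact libpq code); it also shows the loop over multiple messages
mentioned above:

    #include <stdio.h>
    #include <gssapi/gssapi.h>

    /*
     * Print every message available for a GSSAPI status code, relying on
     * gss_buffer_desc.length rather than assuming NUL termination.
     */
    static void
    print_gss_status(OM_uint32 code, int status_type)
    {
        OM_uint32       lmin_s;
        OM_uint32       msg_ctx = 0;
        gss_buffer_desc msg;

        do
        {
            gss_display_status(&lmin_s, code, status_type,
                               GSS_C_NO_OID, &msg_ctx, &msg);
            fprintf(stderr, "%.*s\n", (int) msg.length, (char *) msg.value);
            gss_release_buffer(&lmin_s, &msg);
        } while (msg_ctx != 0);
    }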
Per report from Sudheer H R.
Discussion: https://postgr.es/m/5372B6D4-8276-42C0-B8FB-BD0918826FC3@tekenlight.com
We had a request to provide a way to test at compile time for the
availability of the new pipeline features. More generally, it
seems like a good idea to provide a way to test via #ifdef for
all new libpq API features. People have been using the version
from pg_config.h for that; but that's more likely to represent the
server version than the libpq version, in the increasingly-common
scenario where they're different. It's safer if libpq-fe.h itself
is the source of truth about what features it offers.
Hence, establish a policy that starting in v14 we'll add a suitable
feature-is-present macro to libpq-fe.h when we add new API there.
(There doesn't seem to be much point in applying this policy
retroactively, but it's not too late for v14.)
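For example, a client can compile conditionally against the pipeline
API using the macro added for it (LIBPQ_HAS_PIPELINING is the name used
for the v14 pipeline additions; sketch only):

    #include <libpq-fe.h>

    static int
    enter_pipeline_if_available(PGconn *conn)
    {
    #ifdef LIBPQ_HAS_PIPELINING
        /* libpq-fe.h advertises the pipeline API; use it */
        return PQenterPipelineMode(conn);
    #else
        /* older libpq: no pipeline support, caller falls back */
        return 0;
    #endif
    }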
Tom Lane and Alvaro Herrera, per suggestion from Boris Kolpackov.
Discussion: https://postgr.es/m/boris.20210617102439@codesynthesis.com
Commit acb7e4eb6b added a new implementation for PQsendQuery so that
it works in pipeline mode (by using extended query protocol), but it
behaves differently from the 'Q' message (in simple query protocol) used
by the regular implementation: the new one doesn't close the unnamed portal.
Change the new code to have identical behavior to the old.
Reported-by: Yura Sokolov <y.sokolov@postgrespro.ru>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/202106072107.d4i55hdscxqj@alvherre.pgsql
We have a dozen PQset*() functions. PQresultSetInstanceData() and this
were the libpq setter functions having a different word order. Adopt
the majority word order.
Reviewed by Alvaro Herrera and Robert Haas, though this choice of name
was not unanimous.
Discussion: https://postgr.es/m/20210605060555.GA216695@rfd.leadboat.com
Buildfarm member hamerkop has been reporting that two cases in
connect/test5.pgc show different error messages than the test expects,
because since commit ffa2e4670 libpq's connection failure messages
are exposing the fact that a GSS-encrypted connection was attempted
and failed. That's pretty interesting information in itself, and
I certainly don't wish to shoot the messenger, but we need to do
something to stabilize the ECPG results.
For the second of these two failure cases, we can add the
gssencmode=disable option to prevent the discrepancy. However,
that solution is problematic for the first failure, because the only
unique thing about that case is that it's testing a completely-omitted
connection target; there's no place to add the option without defeating
the point of the test case. After some thrashing around with
alternative fixes that turned out to have undesirable side-effects,
the most workable answer is just to give up and remove that test case.
Perhaps we can revert this later, if we figure out why the GSS code
is misbehaving in hamerkop's environment.
Thanks to Michael Paquier for exploration of alternatives.
Discussion: https://postgr.es/m/YLRZH6CWs9N6Pusy@paquier.xyz
The FE/BE protocol identifies parameters with an Int16 index, which
limits the maximum number of parameters per query to 65535. With
batching added to postgres_fdw this limit is much easier to hit, because
the whole batch is essentially sent as a single query.
The failures are a bit unpredictable, because the parameter count also
depends on the number of columns in the query. So instead of just
failing, this patch
tweaks the batch_size to not exceed the maximum number of parameters.
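As a rough sketch of the adjustment (names are illustrative, not the
exact postgres_fdw code), the cap amounts to:

    #define MAX_QUERY_PARAMS 65535  /* Int16 parameter index in the FE/BE protocol */

    /*
     * Clamp the batch size so rows-per-batch times parameters-per-row
     * cannot exceed the protocol limit.
     */
    static int
    cap_batch_size(int batch_size, int params_per_row)
    {
        if (params_per_row > 0 &&
            batch_size > MAX_QUERY_PARAMS / params_per_row)
            batch_size = MAX_QUERY_PARAMS / params_per_row;
        return batch_size;
    }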
Reported-by: Hou Zhijie <houzj.fnst@cn.fujitsu.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB571603973C0AC2874AD6BF2594299%40OS0PR01MB5716.jpnprd01.prod.outlook.com
An incorrectly-encoded multibyte character near the end of a string
could cause various processing loops to run past the string's
terminating NUL, with results ranging from no detectable issue to
a program crash, depending on what happens to be in the following
memory.
This isn't an issue in the server, because we take care to verify
the encoding of strings before doing any interesting processing
on them. However, that lack of care leaked into client-side code
which shouldn't assume that anyone has validated the encoding of
its input.
Although this is certainly a bug worth fixing, the PG security team
elected not to regard it as a security issue, primarily because
any untrusted text should be sanitized by PQescapeLiteral or
the like before being incorporated into a SQL or psql command.
(If an app fails to do so, the same technique can be used to
cause SQL injection, with probably much more dire consequences
than a mere client-program crash.) Those functions were already
made proof against this class of problem, cf CVE-2006-2313.
To fix, invent PQmblenBounded() which is like PQmblen() except it
won't return more than the number of bytes remaining in the string.
In HEAD we can make this a new libpq function, as PQmblen() is.
It seems imprudent to change libpq's API in stable branches though,
so in the back branches define PQmblenBounded as a macro in the files
that need it. (Note that just changing PQmblen's behavior would not
be a good idea; notably, it would completely break the escaping
functions' defense against this exact problem. So we just want a
version for those callers that don't have any better way of handling
this issue.)
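The bounded variant can be expressed directly in terms of PQmblen();
a sketch of the idea, mirroring the macro form used in the back
branches (assuming strnlen() is available):

    #include <string.h>
    #include <libpq-fe.h>

    /*
     * Like PQmblen(), but never report more bytes than remain before the
     * terminating NUL, so a truncated multibyte character at the end of
     * the string cannot push a caller past the buffer.
     */
    static int
    mblen_bounded(const char *s, int encoding)
    {
        return (int) strnlen(s, PQmblen(s, encoding));
    }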
Per private report from houjingyi. Back-patch to all supported branches.
Also "make reformat-dat-files".
The only change worthy of note is that pgindent messed up the formatting
of launcher.c's struct LogicalRepWorkerId, which led me to notice that
that struct wasn't used at all anymore, so I just took it out.
Instead, put them in via a format placeholder. This reduces the
number of distinct translatable messages and also reduces the chances
of typos during translation. We already did this for the system call
arguments in a number of cases, so this is just the same thing taken a
bit further.
Discussion: https://www.postgresql.org/message-id/flat/92d6f545-5102-65d8-3c87-489f71ea0a37%40enterprisedb.com
A (relatively minor) annoyance of ErrorResponse/NoticeResponse messages
as printed by PQtrace() is that their length can vary when we move
error messages from one source file or function to another, or even
when their source line numbers change in number of digits.
To avoid having to adjust expected files for some tests, make the
regress mode of PQtrace() suppress the length word of NoticeResponse and
ErrorResponse messages.
Discussion: https://postgr.es/m/20210402023010.GA13563@alvherre.pgsql
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
This adds support for writing CREATE FUNCTION and CREATE PROCEDURE
statements for language SQL with a function body that conforms to the
SQL standard and is portable to other implementations.
Instead of the PostgreSQL-specific AS $$ string literal $$ syntax,
this allows writing out the SQL statements making up the body
unquoted, either as a single statement:
    CREATE FUNCTION add(a integer, b integer) RETURNS integer
      LANGUAGE SQL
      RETURN a + b;

or as a block:

    CREATE PROCEDURE insert_data(a integer, b integer)
      LANGUAGE SQL
      BEGIN ATOMIC
        INSERT INTO tbl VALUES (a);
        INSERT INTO tbl VALUES (b);
      END;
The function body is parsed at function definition time and stored as
expression nodes in a new pg_proc column prosqlbody. So at run time,
no further parsing is required.
However, this form does not support polymorphic arguments, because
there is no more parse analysis done at call time.
Dependencies between the function and the objects it uses are fully
tracked.
A new RETURN statement is introduced. This can only be used inside
function bodies. Internally, it is treated much like a SELECT
statement.
psql needs some new intelligence to keep track of function body
boundaries so that it doesn't send off statements when it sees
semicolons that are inside a function body.
Tested-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
By default, have libpq set the TLS extension "Server Name Indication" (SNI).
This allows an SNI-aware SSL proxy to route connections. (This
requires a proxy that is aware of the PostgreSQL protocol, not just
any SSL proxy.)
In the future, this could also allow the server to use different SSL
certificates for different host specifications. (That would require
new server functionality. This would be the client-side functionality
for that.)
Since SNI makes the host name appear in cleartext in the network
traffic, this might be undesirable in some cases. Therefore, also add
a libpq connection option "sslsni" to turn it off.
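For example (sketch; host and database names are placeholders), a
client that must keep the host name out of the TLS handshake can
disable it explicitly:

    #include <libpq-fe.h>

    static PGconn *
    connect_without_sni(void)
    {
        /* sslsni=0 keeps the host name out of the TLS ClientHello */
        return PQconnectdb("host=db.example.com dbname=app "
                           "sslmode=require sslsni=0");
    }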
Discussion: https://www.postgresql.org/message-id/flat/7289d5eb-62a5-a732-c3b9-438cee2cb709%40enterprisedb.com
It seems that in MSVC timeval's tv_sec field is of type long.
localtime() takes a time_t pointer. Since long is 32-bit even on 64-bit
builds in MSVC, passing a long pointer instead of the correct time_t
pointer generated a compiler warning. Fix that.
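The fix boils down to copying the value into a real time_t before
calling localtime(); a sketch (not the exact libpq code):

    #include <time.h>
    #include <sys/time.h>       /* struct timeval; <winsock2.h> on Windows */

    static struct tm *
    timeval_to_tm(const struct timeval *tv)
    {
        /* copy, rather than casting &tv->tv_sec to time_t * */
        time_t      sec = tv->tv_sec;

        return localtime(&sec);
    }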
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvoRG25X_=ZCGSPb4KN_j2iu=G2uXsRSg8NBZeuhkOSETg@mail.gmail.com
Similarly to the cryptohash implementations, this refactors the existing
HMAC code into a single set of APIs that can be backed by any crypto
library PostgreSQL is built with (currently only OpenSSL). If there
is no such library, a fallback implementation is available. Those new
APIs are designed similarly to the existing cryptohash layer, so there
is no real new design here; the same logic is used for buffer bound
checks and memory handling.
HMAC has a dependency on cryptohashes, so all the cryptohash types
supported by cryptohash{_openssl}.c can be used with HMAC. This
refactoring is an advantage mainly for SCRAM, which included its own
implementation of HMAC with SHA256 rather than relying on the existing
crypto libraries even when PostgreSQL was built with their support.
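A hedged sketch of what using the layer looks like, assuming the
pg_hmac_create/init/update/final/free naming and the cryptohash-style
return conventions (check src/include/common/hmac.h for the exact
signatures):

    #include "postgres_fe.h"
    #include "common/hmac.h"
    #include "common/sha2.h"

    /* Compute HMAC-SHA256 of a message; returns false on any failure. */
    static bool
    hmac_sha256(const uint8 *key, size_t keylen,
                const uint8 *msg, size_t msglen,
                uint8 *out /* PG_SHA256_DIGEST_LENGTH bytes */)
    {
        pg_hmac_ctx *ctx = pg_hmac_create(PG_SHA256);
        bool         ok;

        if (ctx == NULL)
            return false;
        ok = pg_hmac_init(ctx, key, keylen) >= 0 &&
             pg_hmac_update(ctx, msg, msglen) >= 0 &&
             pg_hmac_final(ctx, out, PG_SHA256_DIGEST_LENGTH) >= 0;
        pg_hmac_free(ctx);
        return ok;
    }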
This code has been tested on Windows and Linux, with and without
OpenSSL, across all the versions supported on HEAD from 1.1.1 down to
1.0.1. I have also checked that the implementations work correctly
using some sample results, a custom extension of my own, and
cross-checks of SCRAM authentication between client and backend across
different major versions.
Author: Michael Paquier
Reviewed-by: Bruce Momjian
Discussion: https://postgr.es/m/X9m0nkEJEzIPXjeZ@paquier.xyz
It's misplaced there -- it's not libpq's output stream to tweak in that
way. In particular, POSIX says that it has to be called before any
other operation on the file, so if the stream was previously used by
the calling application, bad things may happen.
Put setvbuf() in libpq_pipeline for good measure.
Also, reduce fopen(..., "w+") to just fopen(..., "w") in
libpq_pipeline.c. It's not clear that this fixes anything, but we don't
use w+ anywhere.
Per complaints from Tom Lane.
Discussion: https://postgr.es/m/3337422.1617229905@sss.pgh.pa.us
Failing to do this can cause a crash, and I suspect is what has happened
with a buildfarm member reporting mysterious failures.
This is an ancient bug, but I'm not backpatching since evidently nobody
cares about PQtrace in older releases.
Discussion: https://postgr.es/m/3333908.1617227066@sss.pgh.pa.us
We must cast the arguments of <ctype.h> functions to unsigned
char to avoid problems where char is signed.
Speaking of which, considering that this *is* a <ctype.h> function,
it's rather remarkable that we aren't seeing more complaints about
not having included that header.
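The idiom in question, for reference (a generic sketch, not a specific
libpq hunk):

    #include <ctype.h>

    /*
     * With plain char as the argument, a byte such as 0xE9 can become a
     * negative value on platforms where char is signed, and passing that
     * to isspace() is undefined behavior; cast to unsigned char first.
     */
    static int
    count_leading_spaces(const char *s)
    {
        int         n = 0;

        while (isspace((unsigned char) s[n]))
            n++;
        return n;
    }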
Per buildfarm.
Remove confusion between time_t and pg_time_t; neither
gettimeofday() nor localtime() deals in the latter.
libpq indeed has no business using <pgtime.h> at all.
Use snprintf not sprintf, to ensure we can't overrun the
supplied buffer. (Unlikely, but let's be safe.)
Per buildfarm.
Commit 92785dac2 copied some logic related to advancement of inStart
from pqParseInput3 into getRowDescriptions and getAnotherTuple,
because it wanted to allow user-defined row processor callbacks to
potentially longjmp out of the library, and inStart would have to be
updated before that happened to avoid an infinite loop. We later
decided that that API was impossibly fragile and reverted it, but
we didn't undo all of the related code changes, and this bit of
messiness survived. Undo it now so that there's just one place in
pqParseInput3's processing where inStart is advanced; this will
simplify addition of better tracing support.
getParamDescriptions had grown similar processing somewhere along
the way (not in 92785dac2; I didn't track down just when), but it's
actually buggy because its handling of corrupt-message cases seems to
have been copied from the v2 logic where we lacked a known message
length. The cases where we "goto not_enough_data" should not simply
return EOF, because then we won't consume the message, potentially
creating an infinite loop. That situation now represents a
definitively corrupt message, and we should report it as such.
Although no field reports of getParamDescriptions getting stuck in
a loop have been seen, it seems appropriate to back-patch that fix.
I chose to back-patch all of this to keep the logic looking more alike
in supported branches.
Discussion: https://postgr.es/m/2217283.1615411989@sss.pgh.pa.us
Based on an analysis of the OpenSSL code with Jacob, moving to EVP for
the cryptohash computations makes it necessary to set up the libcrypto
callbacks that were previously set only for SSL connections, not for
connections without SSL. Without those callbacks, the use of threads is
potentially unsafe for connections calling cryptohashes during
authentication, like MD5 or SCRAM, if a failure happens during a
cryptohash computation.
states is then split into two parts, both using the same locking, with
libcrypto being set up for SSL and non-SSL connections, while SSL
connections set any libssl state afterwards as needed.
Prior to this commit, only SSL connections would have set libcrypto
callbacks that are necessary to ensure a proper thread locking when
using multiple concurrent threads in libpq (ENABLE_THREAD_SAFETY). Note
that this is only required for OpenSSL 1.0.2 and 1.0.1 (oldest version
supported on HEAD), as 1.1.0 has its own internal locking and it has
dropped support for CRYPTO_set_locking_callback().
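For reference, the classic pre-1.1.0 libcrypto locking setup looks
roughly like this (a sketch of the general pattern, not the exact libpq
code; an id callback would normally be registered as well):

    #include <stdlib.h>
    #include <pthread.h>
    #include <openssl/crypto.h>

    static pthread_mutex_t *crypto_locks;

    /* callback invoked by libcrypto to take or release lock number n */
    static void
    locking_cb(int mode, int n, const char *file, int line)
    {
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&crypto_locks[n]);
        else
            pthread_mutex_unlock(&crypto_locks[n]);
    }

    /* no-op on OpenSSL 1.1.0 and later, which handle locking internally */
    static void
    init_crypto_locks(void)
    {
        int         nlocks = CRYPTO_num_locks();

        crypto_locks = malloc(nlocks * sizeof(pthread_mutex_t));
        if (crypto_locks == NULL)
            return;             /* error handling elided in this sketch */
        for (int i = 0; i < nlocks; i++)
            pthread_mutex_init(&crypto_locks[i], NULL);
        CRYPTO_set_locking_callback(locking_cb);
    }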
Tests with up to 300 threads with OpenSSL 1.0.1 and 1.0.2, mixing SSL
and non-SSL connection threads, did not show any performance impact
after some micro-benchmarking. pgbench can be used here with -C and a
mostly-empty script (with one \set meta-command for example) to stress
authentication requests, and we have mixed that with some custom
programs for testing.
Reported-by: Jacob Champion
Author: Michael Paquier
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com
This partially reverts 096bbf7 and 9d2d457, undoing the libpq changes,
as they could cause breakage in distributions that share a single libpq
version across multiple major versions of Postgres for extensions and
applications linking to it.
Note that the backend is unchanged here: it still disables SSL
compression, and the underlying catalogs that tracked whether
compression was enabled for an SSL connection remain simplified.
Per discussion with Tom Lane and Daniel Gustafsson.
Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
The authtype parameter was deprecated and made inactive in commit
d5bbe2aca5, but the environment variable was left defined and thus
tested with a getenv call even though the value is of no use. Also,
if it existed, it would be copied but never freed, as the cleanup
code had been removed.
tty was deprecated in commit cb7fb3ca95 but most of the
infrastructure around it remained in place.
Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/DDDF36F3-582A-4C02-8598-9B464CC42B34@yesql.se
Per buildfarm member crake, any setup including a postgres_fdw foreign
server with this option set would fail to pg_upgrade properly, as the
option got hidden in f9264d1 by becoming a debug option, making the
restore of the FDW server fail.
This changes the option back to being visible in libpq, while keeping
it inactive, to fix this upgrade issue.
Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
PostgreSQL disabled compression as of e3bdb2d and the documentation
has recommended against using it since then. Additionally, SSL compression has
been disabled in OpenSSL since version 1.1.0, and was disabled in many
distributions long before that. The most recent TLS version, TLSv1.3,
disallows compression at the protocol level.
This commit removes the feature itself, dropping support for the libpq
parameter sslcompression (the parameter is still accepted, for
compatibility with existing connection strings, but ignored), and
removes the equivalent field in pg_stat_ssl and, de facto, in
PgBackendSSLStatus.
Note that, on top of removing the ability to activate compression by
configuration, compression is actively disabled in both frontend and
backend to avoid overrides from local configurations.
A TAP test is added for the deprecated SSL parameters to check
backwards compatibility.
Bump catalog version.
Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
Discussion: https://postgr.es/m/7E384D48-11C5-441B-9EC3-F7DB1F8518F6@yesql.se