Commit b8ba7344e9 made PQgetResult dereference the pointer returned
by pqPrepareAsyncResult() before some other code that was already
testing that pointer for nullness. But since commit 618c16707a
(in Postgres 15), pqPrepareAsyncResult() never returns NULL (a
statically-allocated result is returned on OOM). So in
branches 15 and up, we can remove the redundant pointer check with no
harm done.
However, in branch 14, pqPrepareAsyncResult() can indeed return NULL if
it runs out of memory. Fix things there by adding a null pointer check
before dereferencing the pointer. This should hint to Coverity that
the preexisting check is not redundant but necessary.
Backpatch to 14, like b8ba7344e9.
Per Coverity.
The logic to keep the libpq command queue in sync with queries that have
been processed had a bug when errors were returned for reasons other
than problems in the queries themselves -- for example, when a
connection is lost. We incorrectly consumed an element from the command
queue every time, which can lead to the queue becoming empty ahead of
time, causing later malfunction: PQgetResult would return nothing,
potentially making the calling application enter a busy loop.
Fix by making the SYNC queue element a barrier that can only be consumed
when a SYNC message is received.
Backpatch to 14.
Reported by: Иван Трофимов (Ivan Trofimov) <i.trofimow@yandex.ru>
Discussion: https://postgr.es/m/17948-fcace7557e449957@postgresql.org
When PQsendFlushRequest() was added by commit 69cf1d5429, we argued
against adding a PQflush() call in it[1]. This is still the right
decision: if the user wants a flush to occur, they can just call that.
However, we failed to realize that the message bytes could still be
handed to the kernel for transmission whenever that can be done without
blocking. That's what pqPipelineFlush() does, and it is done for every
other message type sent by libpq, so do it here too.
(When the socket is in blocking mode this may indeed block, but that's
what all the other libpq message-sending routines do, too.)
[1] https://www.postgresql.org/message-id/202106252352.5ca4byasfun5%40alvherre.pgsql
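To illustrate, a hedged sketch (the helper name is invented, error
checks omitted):

    #include <libpq-fe.h>

    /* Queue a query and a flush request; with this change the bytes
     * are also handed to the kernel whenever that can be done without
     * blocking, just as for other message types. */
    static void
    send_and_flush(PGconn *conn)
    {
        PQsendQueryParams(conn, "SELECT 1",
                          0, NULL, NULL, NULL, NULL, 0);
        PQsendFlushRequest(conn);
    }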
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQTxZRevRWkKodE-SnJk1Yfm4eKT+8E4Cyq3MJ9YKTnNew@mail.gmail.com
When a query whose results were requested in single-row mode is the last
in the queue by the time those results are being read, the single-row
flag was not being reset, because we were returning early from
pqPipelineProcessQueue. Move that stanza up so that the flag is always
reset once that query's results have been fully returned.
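The affected usage pattern, sketched (illustrative code, not the new
test itself):

    #include <libpq-fe.h>

    /* Read the last queued query's results in single-row mode. */
    static void
    drain_single_rows(PGconn *conn)
    {
        PGresult *res;

        PQsetSingleRowMode(conn);   /* should apply to this query only */
        while ((res = PQgetResult(conn)) != NULL)
        {
            /* PGRES_SINGLE_TUPLE per row, a final PGRES_TUPLES_OK,
             * then NULL; the bug left the flag set for whatever query
             * was processed next */
            PQclear(res);
        }
    }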
Add a test for the situation.
Backpatch to 14.
Author: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/01af18c5-dacc-a8c8-07ee-aecc7650c3e8@dalibo.com
The extended query protocol implementation I added in commit
acb7e4eb6b has bugs when used in pipeline mode. Rather than spend
more time trying to fix it, remove that code and make the function rely
on simple query protocol only, meaning it can no longer be used in
pipeline mode.
Users can easily change their applications to use PQsendQueryParams
instead. We leave the pipeline-capable PQsendQuery in place in
Postgres 14, just in case somebody is using it and has not hit the
mentioned bugs; but we should recommend that it not be used.
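For example, a hedged equivalent of a simple call (the wrapper name is
invented):

    #include <libpq-fe.h>

    /* Instead of PQsendQuery(conn, "SELECT now()") in pipeline mode: */
    static int
    send_in_pipeline(PGconn *conn)
    {
        return PQsendQueryParams(conn, "SELECT now()",
                                 0,      /* nParams */
                                 NULL,   /* paramTypes */
                                 NULL,   /* paramValues */
                                 NULL,   /* paramLengths */
                                 NULL,   /* paramFormats */
                                 0);     /* resultFormat: text */
    }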
Backpatch to 15.
Per bug report from Gabriele Varrazzo.
Discussion: https://postgr.es/m/CA+mi_8ZGSQNmW6-mk_iSR4JZB_LJ4ww3suOF+1vGNs3MrLsv4g@mail.gmail.com
There's a convention that externally-visible libpq functions should
check for a NULL PGconn pointer, and fail gracefully instead of
crashing. PQflush() and PQisnonblocking() didn't get that memo
though. Also add a similar check to PQdefaultSSLKeyPassHook_OpenSSL;
while it's not clear that ordinary usage could reach that with a
null conn pointer, it's cheap enough to check, so let's be consistent.
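The convention amounts to this shape (a sketch, not the exact libpq
source):

    int
    PQflush(PGconn *conn)
    {
        if (conn == NULL)
            return -1;          /* fail gracefully instead of crashing */
        return pqFlush(conn);   /* the preexisting internal workhorse */
    }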
Daniele Varrazzo and Tom Lane
Discussion: https://postgr.es/m/CA+mi_8Zm_mVVyW1iNFgyMd9Oh0Nv8-F+7Y3-BqwMgTMHuo_h2Q@mail.gmail.com
Commit 054325c5ee created a memory leak in PQsendQueryInternal in case
an error occurs while sending the message. Repair.
Backpatch to 14, like that commit. Reported by Coverity.
We were going into IDLE state too soon when executing queries via
PQsendQuery in pipeline mode, causing several scenarios to misbehave in
different ways -- most notably, as reported by Daniele Varrazzo, that a
warning message is produced by libpq:
  message type 0x33 arrived from server while idle
But it is also possible, if queries are sent and results consumed not in
lockstep, for the expected mediating NULL result values from PQgetResult
to be lost (a problem which has not been reported, but which is more
serious).
Fix this by introducing two new concepts: one is a command queue element
PGQUERY_CLOSE to tell libpq to wait for the CloseComplete server
response to the Close message that is sent by PQsendQuery. Because the
application is not expecting any PGresult from this, the mechanism to
consume it is a bit hackish.
The other concept, authored by Horiguchi-san, is a PGASYNC_PIPELINE_IDLE
state for libpq's state machine to differentiate "really idle" from
merely "the idle state that occurs in between reading results from the
server for elements in the pipeline". This makes libpq not go fully
IDLE when the libpq command queue contains entries; in normal cases, we
only go IDLE once at the end of the pipeline, when the server response
to the final SYNC message is received. (However, there are corner cases
it doesn't fix, such as terminating the query sequence by
PQsendFlushRequest instead of PQpipelineSync; this sort of scenario is
what requires the PGQUERY_CLOSE bit above.)
This last bit helps make the libpq state machine clearer; in particular
we can get rid of an ugly hack in pqParseInput3 to avoid considering
IDLE as such when the command queue contains entries.
A new test mode is added to libpq_pipeline.c to tickle some related
problematic cases.
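For reference, the normal lockstep consumption pattern looks like this
(a hedged sketch; error handling omitted):

    #include <libpq-fe.h>

    static void
    two_queries_lockstep(PGconn *conn)
    {
        PGresult *res;

        PQenterPipelineMode(conn);
        PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
        PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0);
        PQpipelineSync(conn);

        while ((res = PQgetResult(conn)) != NULL)
            PQclear(res);       /* results of "SELECT 1", then NULL */
        while ((res = PQgetResult(conn)) != NULL)
            PQclear(res);       /* results of "SELECT 2", then NULL */

        res = PQgetResult(conn);    /* PGRES_PIPELINE_SYNC for the Sync */
        PQclear(res);
        PQexitPipelineMode(conn);
    }

It is when the application departs from this lockstep pattern that the
mediating NULLs could be lost.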
Reported-by: Daniele Varrazzo <daniele.varrazzo@gmail.com>
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/CA+mi_8bvD0_CW3sumgwPvWdNzXY32itoG_16tDYRu_1S2gV2iw@mail.gmail.com
Commit 618c16707 invented an "error_result" flag in PGconn, which
intends to represent the state that we have an error condition and
need to build a PGRES_FATAL_ERROR PGresult from the message text in
conn->errorMessage, but have not yet done so. (Postponing construction
of the error object simplifies dealing with out-of-memory conditions
and with concatenation of messages for multiple errors.) For nearly all
purposes, this "virtual" PGresult object should act the same as if it
were already materialized. But a couple of places in fe-protocol3.c
didn't get that memo, and were only testing conn->result as they used
to, without also checking conn->error_result.
In hopes of reducing the probability of similar mistakes in future,
I invented a pgHavePendingResult() macro that includes both tests.
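Sketched from the description above, the macro's shape is believed to
be:

    #define pgHavePendingResult(conn) \
        ((conn)->result != NULL || (conn)->error_result)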
Per report from Peter Eisentraut.
Discussion: https://postgr.es/m/b52277b9-fa66-b027-4a37-fb8989c73ff8@enterprisedb.com
MERGE performs actions that modify rows in the target table using a
source table or query. MERGE provides a single SQL statement that can
conditionally INSERT/UPDATE/DELETE rows -- a task that would otherwise
require multiple PL statements. For example,
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
  UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
  DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
  INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
  DO NOTHING;
MERGE works with regular tables, partitioned tables and inheritance
hierarchies, including column and row security enforcement, as well as
support for row and statement triggers and transition tables therein.
MERGE is optimized for OLTP and is parameterizable, though also useful
for large scale ETL/ELT. MERGE is not intended to be used in preference
to existing single SQL commands for INSERT, UPDATE or DELETE since there
is some overhead. MERGE can be used from PL/pgSQL.
MERGE does not support targeting updatable views or foreign tables, and
RETURNING clauses are not allowed either. These limitations are likely
fixable with sufficient effort. Rewrite rules are also not supported,
but it's not clear that we'd want to support them.
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
Reviewed-by: Peter Geoghegan <pg@bowt.ie> (earlier versions)
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql
Since commit ffa2e4670, libpq resets conn->errorMessage only when
starting a new query. However, the later introduction of pipelining
requires a further refinement: the "start of query" isn't necessarily
when it's submitted to PQsendQueryStart. If we clear at that point
then we risk dropping text for an error that the application has not
noticed yet. Instead, when queuing a query while a previous query is
still in flight, leave errorMessage alone; reset it when we begin
to process the next query in pqPipelineProcessQueue.
Perhaps this should be back-patched to v14 where ffa2e4670 came in.
However I'm uncertain about whether it interacts with 618c16707.
In the absence of user complaints, leave v14 alone.
Discussion: https://postgr.es/m/1421785.1645723238@sss.pgh.pa.us
Since commit ffa2e4670, libpq accumulates text in conn->errorMessage
across a whole query cycle. In some situations, we may report more
than one error event within a cycle: the easiest case to reach is
where we report a FATAL error message from the server, and then a
bit later we detect loss of connection. Since, historically, each
error PGresult bears the entire content of conn->errorMessage,
this results in duplication of the FATAL message in any output that
concatenates the contents of the PGresults.
Accumulation in errorMessage still seems like a good idea, especially
in view of the number of places that did ad-hoc error concatenation
before ffa2e4670. So to fix this, let's track how much of
conn->errorMessage has been read out into error PGresults, and only
include new text in later PGresults. The tricky part of that is
to be sure that we never discard an error PGresult once made (else
we'd risk dropping some text, a problem much worse than duplication).
While libpq formerly did that in some code paths, a little bit of
rearrangement lets us postpone making an error PGresult at all until
we are about to return it.
A side benefit of that postponement is that it now becomes practical
to return a dummy static PGresult in cases where we hit out-of-memory
while trying to manufacture an error PGresult. This eliminates the
admittedly-very-rare case where we'd return NULL from PQgetResult,
indicating successful query completion, even though what actually
happened was an OOM failure.
Discussion: https://postgr.es/m/ab4288f8-be5c-57fb-2400-e3e857f53e46@enterprisedb.com
As currently implemented, failure of a PGEVT_RESULTCREATE callback
causes the PGresult to be converted to an error result. This is
intellectually inconsistent (shouldn't a failing callback likewise
prevent creation of the error result? what about side-effects on the
behavior seen by other event procs? why does PQfireResultCreateEvents
act differently from PQgetResult?), but more importantly it destroys
any promises we might wish to make about the behavior of libpq in
nontrivial operating modes, such as pipeline mode. For example,
it's not possible to promise that PGRES_PIPELINE_SYNC results will
be returned if an event callback fails on those. With this
definition, expecting applications to behave sanely in the face of
possibly-failing callbacks seems like a very big lift.
Hence, redefine the result of a callback failure as being simply
that that event procedure won't be called any more for this PGresult
(which was true already). Event procedures can still signal failure
back to the application through out-of-band mechanisms, for example
via their passthrough arguments.
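For instance, an event procedure might signal failure like this (a
hedged sketch; the state type and proc name are invented):

    #include <stdlib.h>
    #include <libpq-events.h>

    typedef struct { int had_failure; } MyEventState;

    static int
    myEventProc(PGEventId evtId, void *evtInfo, void *passThrough)
    {
        MyEventState *state = (MyEventState *) passThrough;

        if (evtId == PGEVT_RESULTCREATE)
        {
            void *mem = malloc(64);     /* some per-result setup */

            if (mem == NULL)
            {
                state->had_failure = 1; /* out-of-band failure signal */
                return 0;   /* merely stops events for this PGresult */
            }
            free(mem);      /* a real proc would attach it instead */
        }
        return 1;           /* success */
    }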
Similarly, don't let failure of a PGEVT_RESULTCOPY callback prevent
PQcopyResult from succeeding. That definition allowed a misbehaving
event proc to break single-row mode (our sole internal use of
PQcopyResult), and it probably had equally deleterious effects for
outside uses.
Discussion: https://postgr.es/m/3185105.1644960083@sss.pgh.pa.us
In commit 1f39a1c06 I made PQisBusy consider conn->write_failed, but
that is now looking like complete brain fade. In the first place, the
logic is quite wrong: it ought to be like "and not" rather than "or".
This meant that once we'd gotten into a write_failed state, PQisBusy
would always return true, probably causing the calling application to
iterate its loop until PQconsumeInput returns a hard failure thanks
to connection loss. That's not what we want: the intended behavior
is to return an error PGresult, which the application probably has
much cleaner support for.
But in the second place, checking write_failed here seems like the
wrong thing anyway. The idea of the write_failed mechanism is to
postpone handling of a write failure until we've read all we can from
the server; so that flag should not interfere with input-processing
behavior. (Compare 7247e243a.) What we *should* check for is
status = CONNECTION_BAD, ie, socket already closed. (Most places that
close the socket don't touch asyncStatus, but they do reset status.)
This primarily ensures that if PQisBusy() returns true then there is
an open socket, which is assumed by several call sites in our own
code, and probably other applications too.
While at it, fix a nearby thinko in libpq's my_sock_write: we should
only consult errno for res < 0, not res == 0. This is harmless since
pqsecure_raw_write would force errno to zero in such a case, but it
still could confuse readers.
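The corrected pattern in general form (a hypothetical wrapper;
my_sock_write itself is an OpenSSL BIO callback inside libpq):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    static ssize_t
    checked_write(int sock, const void *buf, size_t len)
    {
        ssize_t res = write(sock, buf, len);

        if (res < 0)    /* not "res <= 0": errno is meaningless for 0 */
            fprintf(stderr, "write failed: errno %d\n", errno);
        return res;
    }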
Noted by Andres Freund. Backpatch to v12 where 1f39a1c06 came in.
Discussion: https://postgr.es/m/20220211011025.ek7exh6owpzjyudn@alap3.anarazel.de
Commit ffa2e4670 changed libpq so that multiple error reports
occurring during one operation (a connection attempt or query)
are accumulated in conn->errorMessage, where before new ones
usually replaced any prior error. At least in theory, that makes
us more vulnerable to running out of memory for the errorMessage
buffer. If it did happen, the user would be left with just an
empty-string error report, which is pretty unhelpful.
We can improve this by relying on pqexpbuffer.c's existing "broken
buffer" convention to track whether we've hit OOM for the current
operation's error string, and then substituting a constant "out of
memory" string in the small number of places where the errorMessage
is read out.
While at it, apply the same method to similar OOM cases in
pqInternalNotice and pqGetErrorNotice3.
Back-patch to v14 where ffa2e4670 came in. In principle this could
go back further; but in view of the lack of field reports, the
hazard seems negligible in older branches.
Discussion: https://postgr.es/m/530153.1627425648@sss.pgh.pa.us
When sending queries in pipeline mode, we were careless about leaving
the connection in the right state so that PQgetResult would behave
correctly; after reading a result carrying an error and then sending
another query, trying to read further results would sometimes hang.
Fix by ensuring internal libpq state is updated properly. All the state
changes were being done by the callers of pqAppendCmdQueueEntry(); it
would have become too repetitious to have this logic in each of them, so
instead put it all in that function and relieve callers of the
responsibility.
Add a test to verify this case. Without the code fix, this new test
hangs sometimes.
Also, document that PQisBusy() returns false when no queries are
awaiting results. This is not intuitively obvious, since calling
PQgetResult() at that point would return NULL, which can be confusing.
Wording by Boris Kolpackov.
In passing, fix bogus use of "false" to mean "0", per Ranier Vilela.
Backpatch to 14.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Boris Kolpackov <boris@codesynthesis.com>
Discussion: https://postgr.es/m/boris.20210624103805@codesynthesis.com
The original coding required that PQpipelineSync had been called before
the first call to PQgetResult, and failure to do that would result in an
unexpected NULL result being returned. Fix by setting the right state
when a query is sent, rather than leaving it unchanged and having
PQpipelineSync apply the necessary state change.
A new test case to verify the behavior is added, which relies on the new
PQsendFlushRequest() function added by commit a7192326c7.
Backpatch to 14, where pipeline mode was added.
Reported-by: Boris Kolpackov <boris@codesynthesis.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/boris.20210616110321@codesynthesis.com
This new libpq function allows the application to send an 'H' message,
which instructs the server to flush its outgoing buffer.
This hasn't been needed so far because the Sync message already requests
a buffer flush; and I failed to realize that it would be needed in
pipeline mode, since PQpipelineSync also causes the buffer to be
flushed. However,
sometimes it is useful to request a flush without establishing a
synchronization point.
Backpatch to 14, where pipeline mode was introduced in libpq.
Reported-by: Boris Kolpackov <boris@codesynthesis.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/202106252350.t76x73nt643j@alvherre.pgsql
Commit acb7e4eb6b added a new implementation for PQsendQuery so that
it works in pipeline mode (by using extended query protocol), but it
behaves differently from the 'Q' message (in simple query protocol) used
by the regular implementation: the new one doesn't close the unnamed portal.
Change the new code to have identical behavior to the old.
Reported-by: Yura Sokolov <y.sokolov@postgrespro.ru>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/202106072107.d4i55hdscxqj@alvherre.pgsql
The FE/BE protocol identifies parameters with an Int16 index, which
limits the maximum number of parameters per query to 65535. With
batching added to postgres_fdw, this limit is much easier to hit, as
the whole batch is essentially a single query.
The failures are a bit unpredictable, because they also depend on the
number of columns in the query. So instead of just failing, this patch
tweaks the batch_size to not exceed the maximum number of parameters.
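The clamping logic amounts to the following (a minimal sketch; names
are illustrative, not the actual postgres_fdw code):

    #define MAX_QUERY_PARAMS 65535  /* Int16 parameter-index limit */

    static int
    clamp_batch_size(int batch_size, int params_per_row)
    {
        /* e.g. 100 parameters per row: at most 655 rows per batch */
        if (params_per_row > 0 &&
            batch_size > MAX_QUERY_PARAMS / params_per_row)
            batch_size = MAX_QUERY_PARAMS / params_per_row;
        return batch_size;
    }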
Reported-by: Hou Zhijie <houzj.fnst@cn.fujitsu.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB571603973C0AC2874AD6BF2594299%40OS0PR01MB5716.jpnprd01.prod.outlook.com
Protocol version 3 was introduced in PostgreSQL 7.4. There shouldn't be
many clients or servers left out there without version 3 support. But as
a courtesy, I kept just enough of the old protocol support that we can
still send the "unsupported protocol version" error in v2 format, so that
old clients can display the message properly. Likewise, libpq still
understands v2 ErrorResponse messages when establishing a connection.
The impetus to do this now is that I'm working on a patch to COPY
FROM, to always prefetch some data. We cannot do that safely with the
old protocol, because it requires parsing the input one byte at a time
to detect the end-of-copy marker.
Reviewed-by: Tom Lane, Alvaro Herrera, John Naylor
Discussion: https://www.postgresql.org/message-id/9ec25819-0a8a-d51a-17dc-4150bb3cca3b%40iki.fi
In addition to the existing options of "any" and "read-write", we
now support "read-only", "primary", "standby", and "prefer-standby".
"read-write" retains its previous meaning of "transactions are
read-write by default", and "read-only" inverts that. The other
three modes test specifically for hot-standby status, which is not
quite the same thing. (Setting default_transaction_read_only on
a primary server renders it read-only to this logic, but not a
standby.)
Furthermore, if talking to a v14 or later server, no extra network
round trip is needed to detect the session's status; the GUC_REPORT
variables delivered by the server are enough. When talking to an
older server, a SHOW or SELECT query is issued to detect session
read-only-ness or server hot-standby state, as needed.
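For example (a hedged usage sketch):

    #include <libpq-fe.h>

    /* Prefer a hot standby among the listed hosts, if one is available. */
    static PGconn *
    connect_prefer_standby(void)
    {
        return PQconnectdb(
            "host=host1,host2 dbname=app "
            "target_session_attrs=prefer-standby");
    }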
Haribabu Kommi, Greg Nancarrow, Vignesh C, Tom Lane; reviewed at
various times by Laurenz Albe, Takayuki Tsunakawa, Peter Smith.
Discussion: https://postgr.es/m/CAF3+xM+8-ztOkaV9gHiJ3wfgENTq97QcjXQt+rbFQ6F7oNzt9A@mail.gmail.com
Previously, we had an undisciplined mish-mash of printfPQExpBuffer and
appendPQExpBuffer calls to report errors within libpq. This commit
establishes a uniform rule that appendPQExpBuffer[Str] should be used.
conn->errorMessage is reset only at the start of an application request,
and then accumulates messages till we're done. We can remove no fewer
than three different ad-hoc mechanisms that were used to get the effect
of concatenating error messages within a sequence of operations.
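The accumulating style, sketched with the underlying PQExpBuffer API
(illustrative only; pqexpbuffer.h is a libpq-internal header):

    #include <stdio.h>
    #include "pqexpbuffer.h"

    static void
    report_attempts(void)
    {
        PQExpBufferData errbuf;

        initPQExpBuffer(&errbuf);
        /* messages accumulate until the application request is done */
        appendPQExpBufferStr(&errbuf, "could not connect to host \"h1\"\n");
        appendPQExpBuffer(&errbuf, "could not connect to host \"%s\"\n", "h2");
        fputs(errbuf.data, stderr);     /* both attempts are reported */
        termPQExpBuffer(&errbuf);
    }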
Although this makes things quite a bit cleaner conceptually, the main
reason to do it is to make the world safer for the multiple-target-host
feature that was added awhile back. Previously, there were many cases
in which an error occurring during an individual host connection attempt
would wipe out the record of what had happened during previous attempts.
(The reporting is still inadequate, in that it can be hard to tell which
host got the failure, but that seems like a matter for a separate commit.)
Currently, lo_import and lo_export contain exceptions to the "never
use printfPQExpBuffer" rule. If we changed them, we'd risk reporting
an incidental lo_close failure before the actual read or write
failure, which would be confusing, not least because lo_close happened
after the main failure. We could improve this by inventing an
internal version of lo_close that doesn't reset the errorMessage; but
we'd also need a version of PQfn() that does that, and it didn't quite
seem worth the trouble for now.
Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
Originally, if libpq got a failure (e.g., ECONNRESET) while trying to
send data to the server, it would just report that and wash its hands
of the matter. It was soon found that that wasn't a very pleasant way
of coping with server-initiated disconnections, so we introduced a hack
(pqHandleSendFailure) in the code that sends queries to make it peek
ahead for server error reports before reporting the send failure.
It now emerges that related cases can occur during connection setup;
in particular, as of TLS 1.3 it's unsafe to assume that SSL connection
failures will be reported by SSL_connect rather than during our first
send attempt. We could have fixed that in a hacky way by applying
pqHandleSendFailure after a startup packet send failure, but
(a) pqHandleSendFailure explicitly disclaims suitability for use in any
state except query startup, and (b) the problem still potentially exists
for other send attempts in libpq.
Instead, let's fix this in a more general fashion by eliminating
pqHandleSendFailure altogether, and instead arranging to postpone
all reports of send failures in libpq until after we've made an
attempt to read and process server messages. The send failure won't
be reported at all if we find a server message or detect input EOF.
(Note: this removes one of the reasons why libpq typically overwrites,
rather than appending to, conn->errorMessage: pqHandleSendFailure needed
that behavior so that the send failure report would be replaced if we
got a server message or read failure report. Eventually I'd like to get
rid of that overwrite behavior altogether, but today is not that day.
For the moment, pqSendSome is assuming that its callees will overwrite,
not append to, conn->errorMessage.)
Possibly this change should get back-patched someday; but it needs
testing first, so let's not consider that till after v12 beta.
Discussion: https://postgr.es/m/CAEepm=2n6Nv+5tFfe8YnkUm1fXgvxR0Mm1FoD+QKG-vLNGLyKg@mail.gmail.com
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
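The adjusted loop shape (a hedged sketch; the function name is
invented):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Consume input before *each* PQnotifies() call, not just once. */
    static void
    drain_notifies(PGconn *conn)
    {
        PGnotify *notify;

        if (!PQconsumeInput(conn))
            return;             /* connection trouble; handle elsewhere */
        while ((notify = PQnotifies(conn)) != NULL)
        {
            fprintf(stderr, "NOTIFY \"%s\", payload \"%s\"\n",
                    notify->relname, notify->extra);
            PQfreemem(notify);
            PQconsumeInput(conn);   /* absorb newly-arrived messages */
        }
    }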
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
This reverts commits d204ef6377,
83454e3c2b and a few more commits thereafter
(complete list at the end) related to the MERGE feature.
While the feature was fully functional, with sufficient test coverage and
necessary documentation, it was felt that some parts of the executor and
parse-analyzer could use a different design, and it wasn't possible to do
that in the available time. So it was decided to revert the patch for PG11
and try again in the future.
Thanks again to all reviewers and bug reporters.
List of commits reverted, in reverse chronological order:
f1464c5380 Improve parse representation for MERGE
ddb4158579 MERGE syntax diagram correction
530e69e59b Allow cpluspluscheck to pass by renaming variable
01b88b4df5 MERGE minor errata
3af7b2b0d4 MERGE fix variable warning in non-assert builds
a5d86181ec MERGE INSERT allows only one VALUES clause
4b2d44031f MERGE post-commit review
4923550c20 Tab completion for MERGE
aa3faa3c7a WITH support in MERGE
83454e3c2b New files for MERGE
d204ef6377 MERGE SQL Command following SQL:2016
Author: Pavan Deolasee
Reviewed-by: Michael Paquier
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows --
a task that would otherwise require multiple PL statements.
e.g.
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
  UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
  DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
  INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
  DO NOTHING;
MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.
MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.
MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.
Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.
This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.
Various issues reported via sqlsmith by Andreas Seltenreich
Authors: Pavan Deolasee, Simon Riggs
Reviewer: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
The lower case spellings are C and C++ standard and are used in most
parts of the PostgreSQL sources. The upper case spellings are only used
in some files/modules. So standardize on the standard spellings.
The APIs for ICU, Perl, and Windows define their own TRUE and FALSE, so
those are left as is when using those APIs.
In code comments, we use the lower-case spelling for the C concepts and
keep the upper-case spelling for the SQL concepts.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
In ANSI C, writing (*funcptr) () and funcptr() is equivalent. These
two styles have been applied inconsistently. After discussion, we'll
use the more verbose style for plain function pointer variables, to make
it clear that it's a variable, and the shorter style when the function
pointer is in a struct (s.func() or s->func()), because then it's clear
that it's not a plain function name, and otherwise the excessive
punctuation makes some of those invocations hard to read.
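Concretely (an illustrative sketch):

    typedef void (*cleanup_fn) (void *arg);

    struct resource
    {
        cleanup_fn  cleanup;
    };

    static void
    release(struct resource *r, cleanup_fn fallback, void *arg)
    {
        if (r->cleanup)
            r->cleanup(arg);    /* pointer in a struct: shorter style */
        else
            (*fallback) (arg);  /* plain pointer variable: verbose style */
    }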
Discussion: https://www.postgresql.org/message-id/f52c16db-14ed-757d-4b48-7ef360b1631d@2ndquadrant.com
Adding more than 1 billion rows to a PGresult would overflow its ntups and
tupArrSize fields, leading to client crashes. It'd be desirable to use
wider fields on 64-bit machines, but because all of libpq's external APIs
use plain "int" for row counters, that's going to be hard to accomplish
without an ABI break. Given the lack of complaints so far, and the general
pain that would be involved in using such huge PGresults, let's settle for
just preventing the overflow and reporting a useful error message if it
does happen. Also, for a couple more lines of code we can increase the
threshold of trouble from INT_MAX/2 to INT_MAX rows.
To do that, refactor pqAddTuple() to allow returning an error message that
replaces the default assumption that it failed because of out-of-memory.
Along the way, fix PQsetvalue() so that it reports all failures via
pqInternalNotice(). It already did so in the case of bad field number,
but neglected to report anything for other error causes.
Because of the potential for crashes, this seems like a back-patchable
bug fix, despite the lack of field reports.
Michael Paquier, per a complaint from Igor Korot.
Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
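For instance (illustrative only):

    /* the continuation stays aligned with the open paren, even when
     * that overruns the right margin */
    extern int compute_stats(int relation_id, int sample_rows, int total_rows,
                             int options_with_a_deliberately_long_name);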
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4d wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:
* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
than the expected column 33.
On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list. This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.
There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses. I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us