plpgsql has always executed the query given in a RETURN QUERY command
by opening it as a cursor and then fetching a few rows at a time,
which it turns around and dumps into the function's result tuplestore.
The point of this was to keep from blowing out memory with an oversized
SPITupleTable result (note that while a tuplestore can spill tuples
to disk, SPITupleTable cannot). However, it's rather inefficient, both
because of extra data copying and because of executor entry/exit
overhead. In recent versions, a new performance problem has emerged:
use of a cursor prevents use of a parallel plan for the executed query.
We can improve matters by skipping use of a cursor and having the
executor push result tuples directly into the function's result
tuplestore. However, a moderate amount of new infrastructure is needed
to make that idea work:
* We can use the existing tstoreReceiver.c DestReceiver code to funnel
executor output to the tuplestore, but it has to be extended to support
plpgsql's requirement for possibly applying a tuple conversion map.
(A minimal sketch of this receiver idea appears after this list.)
* SPI needs to be extended to allow use of a caller-supplied
DestReceiver instead of its usual receiver that puts tuples into
a SPITupleTable. Two new API calls are needed to handle both the
RETURN QUERY and RETURN QUERY EXECUTE cases.
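As a rough illustration of the first point, a DestReceiver that funnels
executor output into a tuplestore boils down to something like the
following sketch (not the committed code; the real tstoreReceiver.c
additionally handles startup/shutdown and, with this patch, an optional
tuple conversion map):

    #include "postgres.h"
    #include "tcop/dest.h"
    #include "utils/tuplestore.h"

    typedef struct
    {
        DestReceiver pub;           /* callbacks visible to the executor */
        Tuplestorestate *tstore;    /* where the tuples end up */
    } TStoreSketchState;

    static bool
    tstore_sketch_receive(TupleTableSlot *slot, DestReceiver *self)
    {
        TStoreSketchState *myState = (TStoreSketchState *) self;

        /* append the tuple; the tuplestore can spill to disk if needed */
        tuplestore_puttupleslot(myState->tstore, slot);
        return true;                /* tell the executor to keep going */
    }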
I also felt that I didn't want these new API calls to use the legacy
method of specifying query parameter values with "char" null flags
(the old ' '/'n' convention); rather they should accept ParamListInfo
objects containing the parameter type and value info. This required
a bit of additional new infrastructure since we didn't yet have any
parse analysis callback that would interpret $N parameter symbols
according to type data supplied in a ParamListInfo. There seems to be
no harm in letting makeParamList install that callback by default,
rather than leaving a new ParamListInfo's parserSetup hook as NULL.
(Indeed, as of HEAD, I couldn't find anyplace that was using the
parserSetup field at all; plpgsql was using parserSetupArg for its
own purposes, but parserSetup seemed to be write-only.)
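For illustration, building a one-parameter ParamListInfo the new way
looks roughly like this (a sketch, not the new plpgsql code; the helper
name is made up):

    #include "postgres.h"
    #include "catalog/pg_type_d.h"
    #include "nodes/params.h"

    static ParamListInfo
    make_one_int_param(void)
    {
        /* makeParamList now also installs the default parserSetup hook,
         * so $1 can be resolved from ptype during parse analysis */
        ParamListInfo params = makeParamList(1);

        params->params[0].value = Int32GetDatum(42);
        params->params[0].isnull = false;
        params->params[0].pflags = PARAM_FLAG_CONST;
        params->params[0].ptype = INT4OID;

        return params;
    }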
We can actually get plpgsql out of the business of using legacy null
flags altogether, and have it use ParamListInfo instead of its ad-hoc
PreparedParamsData structure; but this requires inventing one more
SPI API call that can replace SPI_cursor_open_with_args. That seems
worth doing, though.
SPI_execute_with_args and SPI_cursor_open_with_args are now unused
anywhere in the core PG distribution. Perhaps someday we could
deprecate/remove them. But cleaning up the crufty bits of the SPI
API is a task for a different patch.
Per bug #16040 from Jeremy Smith. This is unfortunately too invasive to
consider back-patching. Patch by me; thanks to Hamid Akhtar for review.
Discussion: https://postgr.es/m/16040-eaacad11fecfb198@postgresql.org
Rewrite the documentation of these functions, in light of recent bug fix
commit 5940ffb2.
Back-patch to 13 where the check-for-conflict-out code was split up into
AM-specific and generic parts, and new documentation was added that now
looked wrong.
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/db7b729d-0226-d162-a126-8a8ab2dc4443%40jepsen.io
When merging two NumericAggStates, the code missed adding the new
state's NaNcount unless its N was also nonzero; since those counts
are independent, this is wrong.
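In other words, the combine step needs to look roughly like this
(a sketch with a stripped-down state struct, not the committed diff):

    #include "postgres.h"

    /* fields mirror the N and NaNcount counters of NumericAggState */
    typedef struct
    {
        int64 N;            /* number of non-NaN inputs processed */
        int64 NaNcount;     /* number of NaN inputs processed */
        /* ... sum/variance accumulators elided ... */
    } NumericAggSketch;

    static void
    combine_states(NumericAggSketch *state1, const NumericAggSketch *state2)
    {
        if (state2->N > 0)
        {
            /* ... merge the finite-value accumulators ... */
            state1->N += state2->N;
        }
        /* NaNcount is independent of N, so merge it unconditionally */
        state1->NaNcount += state2->NaNcount;
    }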
This would only have visible effect if some partial aggregate scans
found only NaNs while earlier ones found only non-NaNs; then we could
end up falsely deciding that there were no NaNs and fail to return a
NaN final result as expected. That's pretty improbable, so it's no
surprise this hasn't been reported from the field. Still, it's a bug.
I didn't try to produce a regression test that would show the bug,
but I did notice that these functions weren't being reached at all
in our regression tests, so I improved the tests to at least
exercise them. With these additions, I see pretty complete code
coverage on the aggregation-related functions in numeric.c.
Back-patch to 9.6 where this code was introduced. (I only added
the improved test case as far back as v10, though, since the
relevant part of aggregates.sql isn't there at all in 9.6.)
Eliminate enable_groupingsets_hash_disk, which was primarily useful
for testing grouping sets that use HashAgg and spill. Instead, hack
the table stats to convince the planner to choose hashed aggregation
for grouping sets that will spill to disk. Suggested by Melanie
Plageman.
Rename enable_hashagg_disk to hashagg_avoid_disk_plan, and invert the
meaning of on/off. The new name indicates more strongly that it only
affects the planner. Also, the word "avoid" is less definite, which
should avoid surprises when HashAgg still needs to use the
disk. Change suggested by Justin Pryzby, though I chose a different
GUC name.
Discussion: https://postgr.es/m/CAAKRu_aisiENMsPM2gC4oUY1hHG3yrCwY-fXUg22C6_MJUwQdA%40mail.gmail.com
Discussion: https://postgr.es/m/20200610021544.GA14879@telsasoft.com
Backpatch-through: 13
SSI's HeapCheckForSerializableConflictOut() test failed to correctly
handle conditions involving a concurrently inserted tuple which is later
concurrently updated by a separate transaction. A SELECT statement
that called HeapCheckForSerializableConflictOut() could end up using the
same XID (the updater's XID) for both the original tuple and the successor
tuple, missing the XID of the xact that created the original tuple
entirely. This only happened when neither tuple from the chain was
visible to the transaction's MVCC snapshot.
The observable symptoms of this bug were subtle. A pair of transactions
could commit, with the later transaction failing to observe the effects
of the earlier transaction (because of the confusion created by the
update to the non-visible row). This bug dates all the way back to
commit dafaa3ef, which added SSI.
To fix, make sure that we check the xmin of concurrently inserted tuples
that happen to also have been updated concurrently.
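Illustratively, the rule amounts to choosing the right XID to feed the
conflict-out check (a simplified sketch, not the committed diff; other
visibility cases are elided):

    #include "postgres.h"
    #include "access/heapam.h"
    #include "access/htup_details.h"
    #include "access/transam.h"

    static TransactionId
    xid_for_conflict_out(HeapTuple tuple, HTSV_Result vis)
    {
        if (vis == HEAPTUPLE_INSERT_IN_PROGRESS)
            return HeapTupleHeaderGetXmin(tuple->t_data);       /* inserter */
        if (vis == HEAPTUPLE_DELETE_IN_PROGRESS)
            return HeapTupleHeaderGetUpdateXid(tuple->t_data);  /* updater */
        return InvalidTransactionId;    /* other cases not shown */
    }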
Author: Peter Geoghegan
Reported-By: Kyle Kingsbury
Reviewed-By: Thomas Munro
Discussion: https://postgr.es/m/db7b729d-0226-d162-a126-8a8ab2dc4443@jepsen.io
Backpatch: All supported versions
Remove some code relevant only for dumping from pre-7.1 servers,
support for which had already been removed by
64f3524e2c8deebc02808aa5ebdfa17859473add.
fe_archive.c was compiled only for the frontend in src/common/, but as
it will never share anything with the backend, it makes the most sense to
move this file to src/fe_utils/.
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/e9766d71-8655-ac86-bdf6-77e0e7169977@2ndquadrant.com
Backpatch-through: 13
access_method, database_name, and index_name are all just name, and
they are not used consistently for their alleged purpose, so remove
them. They have been around since ancient times but have no current
reason for existing. Removing them can simplify future grammar
refactoring.
Discussion: https://www.postgresql.org/message-id/flat/163c00a5-f634-ca52-fc7c-0e53deda8735%402ndquadrant.com
The previous description string still described the pre-PostgreSQL
10 (pre eb61136dc75a76caef8460fa939244d8593100f2) behavior of
selecting between encrypted and unencrypted, but the parameter now
chooses between encryption algorithms.
Commit cec2edfa78 introduced logical_decoding_work_mem to limit
ReorderBuffer memory usage. We spill the changes once the memory occupied
by changes exceeds logical_decoding_work_mem. There was an assumption
in the code that by evicting the largest (sub)transaction we will come
under the memory limit as the selected transaction will be at least as
large as the most recent change (which caused us to go over the memory
limit). However, that is not true, because a user can reduce
logical_decoding_work_mem to a smaller value before the most recent
change arrives.
We fix it by evicting transactions until we are back under the
memory limit.
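The adjusted loop looks roughly like this (a sketch; the helpers named
here are internal to reorderbuffer.c):

    #include "postgres.h"
    #include "replication/reorderbuffer.h"

    static void
    check_memory_limit_sketch(ReorderBuffer *rb)
    {
        /* spill the largest transaction repeatedly, instead of assuming
         * that evicting a single transaction is always enough */
        while (rb->size >= logical_decoding_work_mem * 1024L)
        {
            ReorderBufferTXN *txn = ReorderBufferLargestTXN(rb);

            ReorderBufferSerializeTXN(rb, txn);
        }
    }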
Reported-by: Fujii Masao
Author: Amit Kapila
Reviewed-by: Fujii Masao
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/2b7ba291-22e0-a187-d167-9e5309a3458d@oss.nttdata.com
There are a number of Remove${Something}ById() functions that are
essentially identical in structure and only different in which catalog
they are working on. Refactor this to be one generic function. The
information about which oid column, index, etc. to use was already
available in ObjectProperty for most catalogs; in a few cases it was
easily added.
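For illustration, the generic deletion reduces to roughly the following
(a sketch using the ObjectProperty accessors; the function name here is
made up and error handling is abbreviated):

    #include "postgres.h"
    #include "access/table.h"
    #include "catalog/indexing.h"
    #include "catalog/objectaddress.h"

    static void
    drop_object_by_id(const ObjectAddress *object)
    {
        Relation    rel;
        HeapTuple   tup;

        rel = table_open(object->classId, RowExclusiveLock);

        /* ObjectProperty tells us which column holds the row's OID */
        tup = get_catalog_object_by_oid(rel,
                                        get_object_attnum_oid(object->classId),
                                        object->objectId);
        if (!HeapTupleIsValid(tup))
            elog(ERROR, "could not find tuple for object %u",
                 object->objectId);

        CatalogTupleDelete(rel, &tup->t_self);

        table_close(rel, RowExclusiveLock);
    }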
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/331d9661-1743-857f-1cbb-d5728bcd62cb%402ndquadrant.com
Commit 0c882e52a tried to force table atest12 to have more-accurate-
than-default statistics; but transiently setting default_statistics_target
isn't enough for that, because autovacuum could come along and overwrite
the stats later. This evidently explains some intermittent buildfarm
failures we've seen since then. Repair by disabling autovac on this table.
Thanks to David Rowley for correctly diagnosing the cause.
Discussion: https://postgr.es/m/CA+hUKG+OUkQSOUTg=qo=S=fWa_tbm99i7rB7mfbHz1SYm4v-jQ@mail.gmail.com
Previously we used pg_atomic_write_u64_impl inside
pg_atomic_init_u64. That works correctly, but on platforms without
64-bit single-copy atomicity it could trigger spurious valgrind errors
about uninitialized memory, because we use compare-and-swap for atomic
writes on such platforms.
I previously suppressed one instance of this problem (6c878edc1df),
but as Tom reports that wasn't enough. As the atomic variable cannot
yet be accessed concurrently during initialization, it seems better
to have pg_atomic_init_u64_impl set the value directly.
Change pg_atomic_init_u32_impl for symmetry.
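The resulting initialization is then just a plain store; the fallback
implementation amounts to roughly this (a sketch; the real code lives
in the atomics headers):

    /* during initialization no concurrent access is possible, so a
     * plain store suffices and avoids the compare-and-swap path that
     * confused valgrind on platforms lacking 64-bit single-copy
     * atomicity */
    static inline void
    pg_atomic_init_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 val_)
    {
        ptr->value = val_;
    }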
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/1714601.1591503815@sss.pgh.pa.us
Backpatch: 9.5-
The redo routines for XLOG_CHECKPOINT_{ONLINE,SHUTDOWN} must acquire
ControlFileLock before modifying ControlFile->checkPointCopy, or the
checkpointer could write out a control file with a bad checksum.
Likewise, XLogReportParameters() must acquire ControlFileLock before
modifying ControlFile and calling UpdateControlFile().
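Sketched, the pattern the fix enforces in those redo/report paths is
(simplified; the real code updates individual fields of checkPointCopy
and calls UpdateControlFile() where needed):

    LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
    /* ... modify ControlFile->checkPointCopy from the WAL record ... */
    /* ... and/or other ControlFile fields, then UpdateControlFile() ... */
    LWLockRelease(ControlFileLock);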
Back-patch to all supported releases.
Author: Nathan Bossart <bossartn@amazon.com>
Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/70BF24D6-DC51-443F-B55A-95735803842A%40amazon.com
In PostgreSQL 10, we stopped using System V semaphores on Linux
systems. Update the example we give of an error message from a
misconfigured system to show what people are most likely to see these
days.
Back-patch to 10, where PREFERRED_SEMAPHORES=UNNAMED_POSIX arrived.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA%2BhUKGLmJUSwybaPQv39rB8ABpqJq84im2UjZvyUY4feYhpWMw%40mail.gmail.com
Since WAL senders can use a database connection as of 9.4, it is
possible to run physical replication over such a connection. This
commit fixes a crash when
starting physical replication with a WAL sender using a database
connection, caused by the refactoring done in 850196b.
There have been discussions about forbidding the use of physical
replication in a database connection, but this is left for later;
this commit addresses only the crash, which is new in 13.
While at it, add a test to check for a failure when attempting logical
replication if the WAL sender does not have a database connection. This
part is extracted from a larger patch by Kyotaro Horiguchi.
Reported-by: Vladimir Sitnikov
Author: Michael Paquier, Kyotaro Horiguchi
Reviewed-by: Kyotaro Horiguchi, Álvaro Herrera
Discussion: https://postgr.es/m/CAB=Je-GOWMj1PTPkeUhjqQp-4W3=nW-pXe2Hjax6rJFffB5_Aw@mail.gmail.com
Backpatch-through: 13
Commit 7be5d8df1f74b78620167d3abf32ee607e728919 surfaced the logic
error, which had no functional implications, by adding "use warnings".
The buildfarm always customizes PROVE_FLAGS, so the warning did not
appear there. Back-patch to 9.5 (all supported versions).
Even when we've concluded that we have a hard write failure on the
socket, we should continue to try to read data. This gives us an
opportunity to collect any final error message that the backend might
have sent before closing the connection; moreover, it is the job of
pqReadData, not pqSendSome, to close the socket once EOF is detected.
Due to an oversight in 1f39a1c06, pqSendSome failed to try to collect
data in the case where we'd already set write_failed. The problem was
masked for ordinary query operations (which really only make one write
attempt anyway), but COPY to the server would continue to send data
indefinitely after a mid-COPY connection loss.
Hence, add pqReadData calls into the paths where pqSendSome drops data
because of write_failed. If we've lost the connection, this will
eventually result in closing the socket and setting CONNECTION_BAD,
which will cause PQputline and siblings to report failure, allowing
the application to terminate the COPY sooner. (Basically this restores
what happened before 1f39a1c06.)
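The shape of the added path is roughly as follows (a sketch of the new
logic in pqSendSome, not the exact committed code):

    /* A hard write failure was already recorded: discard the queued
     * output, but still try to read, so pqReadData can absorb any
     * final error message and, on EOF, close the socket and set
     * CONNECTION_BAD. */
    if (conn->write_failed)
    {
        conn->outCount = 0;
        return pqReadData(conn) < 0 ? -1 : 0;
    }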
There are related issues that this does not solve; for example, if the
backend sends an error but doesn't drop the connection, we did and
still will keep pumping COPY data as long as the application sends it.
Fixing that will require application-visible behavior changes though,
and anyway it's an ancient behavior that we've had few complaints about.
For now I'm just trying to fix the regression from 1f39a1c06.
Per a complaint from Andres Freund. Back-patch into v12 where
1f39a1c06 came in.
Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
As it stands, the CancelRequested flag is only set when we've successfully sent a
cancel request, not if we get SIGINT and then fail to send a cancel.
However, for almost all callers, that's the Wrong Thing: we'd prefer
to abort processing after control-C even if no cancel could be sent.
As an example, since commit 1d468b9ad, "pgbench -i" fails to give up
sending COPY data even after control-C if the postmaster has been
stopped, which is clearly not what the code intends and not what anyone
would want. (The fact that it keeps going at all is the fault of a
separate bug in libpq, but not letting CancelRequested become set is
clearly not what we want here.)
The sole exception, as far as I can find, is that scripts_parallel.c's
ParallelSlotsGetIdle tries to consume a query result after issuing a
cancel, which of course might not terminate quickly if no cancel
happened. But that behavior was poorly thought out too. No user of
ParallelSlotsGetIdle tries to continue processing after a cancel,
so there is really no point in trying to clear the connection's state.
Moreover this has the same defect as for other users of cancel.c,
that if the cancel request fails for some reason then we end up with
control-C being completely ignored. (On top of that, select_loop failed
to distinguish clearly between SIGINT and other reasons for select(2)
failing, which means that it's possible that the existing code would
think that a cancel has been sent when it hasn't.)
Hence, redefine CancelRequested as simply meaning that SIGINT was
received. We could add a second flag with the other meaning, but
in the absence of any compelling argument why such a flag is needed,
I think it would just offer an opportunity for future callers to
get it wrong. Also remove the consumeQueryResult call in
ParallelSlotsGetIdle's failure exit. In passing, simplify the
API of select_loop.
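Under the new definition the SIGINT handler behaves roughly like this
(a simplified sketch of fe_utils/cancel.c; error reporting and errno
handling are omitted):

    static void
    handle_sigint(SIGNAL_ARGS)
    {
        char    errbuf[256];

        /* record the control-C regardless of what happens below */
        CancelRequested = true;

        /* then try to send a cancel request, strictly best-effort */
        if (cancelConn != NULL)
            (void) PQcancel(cancelConn, errbuf, sizeof(errbuf));
    }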
It would now be possible to re-unify psql's cancel_pressed with
CancelRequested, partly undoing 5d43c3c54. But I'm not really
convinced that that's worth the trouble, so I left psql alone,
other than fixing a misleading comment.
This code is new in v13 (cf a4fd3aa71), so no need for back-patch.
Per investigation of a complaint from Andres Freund.
Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
Commit 24d85952 made a change that indirectly caused a performance
regression by triggering a change in the way GCC optimizes memcpy() on
some platforms.
The behavior seemed to contradict a GCC document, so I filed a report:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95556
This patch implements a narrow workaround which eliminates the
regression I observed. The workaround is benign enough that it seems
unlikely to cause a different regression on another platform.
Discussion: https://postgr.es/m/99b2eab335c1592c925d8143979c8e9e81e1575f.camel@j-davis.com
Whitespace between tags is significant, and in some cases it creates
extra vertical space in man pages. The fix is either to remove some
newlines or in some cases to reword slightly to avoid the awkward
markup layout.
Remove obsolete instructions for old operating system versions, and
update the text to reflect the defaults on modern systems.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Magnus Hagander <magnus@hagander.net>
Discussion: https://postgr.es/m/CA%2BhUKGLmJUSwybaPQv39rB8ABpqJq84im2UjZvyUY4feYhpWMw%40mail.gmail.com