Fix a bug introduced by c1ec02be1d79. It may happen that the executor
opens indexes on the result relation, but no rows end up being inserted.
Then the index_insert_cleanup still gets executed, but passes down NULL
to the AM callback. The AM callback may not expect this, as is the case
for brininsertcleanup, leading to a crash.
Fixed by calling the cleanup callback only if (ii_AmCache != NULL). This
way the AM can simply assume it only ever sees a valid cache.
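A minimal sketch of the resulting check in index_insert_cleanup(), assuming
the callback is reached through the index AM's routine table (details
illustrative, not the exact committed code):

    /* Skip the AM callback entirely when no cache was built, so that
     * callbacks such as brininsertcleanup only ever see a valid cache. */
    if (indexRelation->rd_indam->aminsertcleanup &&
        indexInfo->ii_AmCache != NULL)
        indexRelation->rd_indam->aminsertcleanup(indexInfo);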
Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-w9qC-o9hQox9UHvdVZAYTp8OrPQOKtwbvzWaRejTT=Q@mail.gmail.com
XLogSetAsyncXactLSN(), called at asynchronous commit, would wake up
walwriter every time the LSN advances, but walwriter doesn't actually
do anything unless it has at least 'wal_writer_flush_after' full
blocks of WAL to write. Repeatedly waking up walwriter to do nothing
is a waste of CPU cycles in both walwriter and the backends doing the
wakeups. To fix, make XLogSetAsyncXactLSN() use the same logic that
walwriter itself uses to determine whether it has any work to do when
deciding whether to wake it up.
In passing, rename the misleadingly named 'flushbytes' local variable to
'flushblocks'.
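A sketch of the resulting wakeup test, with illustrative variable names
(the surrounding code is assumed):

    /* Wake up walwriter only if it would actually have work to do, i.e.
     * at least 'wal_writer_flush_after' full blocks of WAL are pending. */
    flushblocks = WriteRqstPtr / XLOG_BLCKSZ - prevAsyncXactLSN / XLOG_BLCKSZ;
    if (WalWriterFlushAfter == 0 || flushblocks >= WalWriterFlushAfter)
        wakeup = true;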
Author: Andres Freund, Heikki Linnakangas
Discussion: https://www.postgresql.org/message-id/20231024230929.vsc342baqs7kmbte@awork3.anarazel.de
52e4f0cd4 introduced a bug in pgoutput in which missing values in tuples
were incorrectly filled in with NULL. The problem was the use of
CreateTupleDescCopy where CreateTupleDescCopyConstr was required, as the
former drops the constraints in the tuple description (specifically, the
default value constraint) on the floor.
The bug could result in incorrectness when a table replicated via
`REPLICA IDENTITY FULL` underwent a schema change that added a column
with a default value. In such cases, updates filled the missing columns
of old tuples with NULL instead of their default values. On the
subscriber, we then failed to find a matching tuple and missed updating
the required row.
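Illustratively, the fix is a one-call substitution (the surrounding
pgoutput code is assumed):

    /* CreateTupleDescCopy() throws away constraints, including stored
     * column defaults, so missing attributes read back as NULL; the
     * Constr variant preserves them. */
    oldtupdesc = CreateTupleDescCopyConstr(indesc);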
Author: Nikhil Benesch
Reviewed-by: Hou Zhijie, Amit Kapila
Backpatch-through: 15
Discussion: http://postgr.es/m/CAPWqQZTEpZQamYsGMn6ZDRvVywwpVPiKH6OY4KSgA+NmeqFNzA@mail.gmail.com
This patch adds 'stats_since' and 'minmax_stats_since' columns to the
pg_stat_statements view and pg_stat_statements() function. The new min/max
reset mode for the pg_stat_statements_reset() function is controlled by the
parameter minmax_only.
The 'stats_since' column is populated with the current timestamp when a new
statement is added to the pg_stat_statements hashtable. It provides reliable
information about the statistics collection time interval of each statement.
Besides, it can be used by sampling solutions to detect situations when a
statement was evicted and stored again between samples.
Such a sampling solution could derive any pg_stat_statements statistic value
for the interval between two samples, with the exception of the min/max
statistics. To address this, the patch adds the ability to reset min/max
statistics independently of a full statement reset, using the new
minmax_only parameter of the pg_stat_statements_reset(userid oid, dbid oid,
queryid bigint, minmax_only boolean) function. The timestamp of such a reset
is stored in the minmax_stats_since field of each statement.
The pg_stat_statements_reset() function now returns the timestamp of the
reset as its result.
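A rough sketch of the two reset modes, with field names as above and the
surrounding entry-reset code assumed:

    if (minmax_only)
    {
        /* Partial reset: clear only the min/max statistics and stamp
         * the time of this partial reset. */
        memset(entry->counters.min_time, 0, sizeof(entry->counters.min_time));
        memset(entry->counters.max_time, 0, sizeof(entry->counters.max_time));
        entry->minmax_stats_since = stats_reset;
    }
    else
    {
        /* Full reset: wipe all counters and restart both timestamps. */
        memset(&entry->counters, 0, sizeof(entry->counters));
        entry->stats_since = stats_reset;
        entry->minmax_stats_since = stats_reset;
    }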
Discussion: https://postgr.es/m/flat/72e80e7b160a6eb189df9ef6f068cce3765d37f8.camel%40moonset.ru
Author: Andrei Zubkov
Reviewed-by: Julien Rouhaud, Hayato Kuroda, Yuki Seino, Chengxi Sun
Reviewed-by: Anton Melnikov, Darren Rush, Michael Paquier, Sergei Kornilov
Reviewed-by: Alena Rybakina, Andrei Lepikhov
This is a preliminary patch. It adds NOT NULL checking of the result of
the pg_stat_statements_reset() function. This is needed for the upcoming
patch "Track statement entry timestamp", which will change the result type
of this function to the timestamp of the reset performed.
Discussion: https://postgr.es/m/flat/72e80e7b160a6eb189df9ef6f068cce3765d37f8.camel%40moonset.ru
Author: Andrei Zubkov
Reviewed-by: Julien Rouhaud, Hayato Kuroda, Yuki Seino, Chengxi Sun
Reviewed-by: Anton Melnikov, Darren Rush, Michael Paquier, Sergei Kornilov
Reviewed-by: Alena Rybakina, Andrei Lepikhov
The libpq code in charge of creating per-connection SSL objects was
prone to a race condition when loading the custom BIO methods needed by
my_SSL_set_fd(). As BIO methods are stored as a static variable, the
initialization of a connection could fail because one thread could refer
to my_bio_methods while it was being manipulated by a second concurrent
thread.
This error was introduced by 8bb14cdd33de, which removed ssl_config_mutex
around the call of my_SSL_set_fd() that itself sets up the custom BIO
methods used by libpq. As previously, the BIO method initialization is now
protected by the existing ssl_config_mutex, itself initialized earlier for
WIN32.
While on it, document that my_bio_methods is protected by
ssl_config_mutex, as this can be easy to miss.
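A simplified sketch of the now-protected initialization;
create_custom_bio_methods() is a hypothetical stand-in for the actual
setup code:

    /* my_bio_methods is protected by ssl_config_mutex. */
    pthread_mutex_lock(&ssl_config_mutex);
    if (my_bio_methods == NULL)
        my_bio_methods = create_custom_bio_methods();   /* one-time setup */
    pthread_mutex_unlock(&ssl_config_mutex);
    if (my_bio_methods == NULL)
        return 0;       /* initialization failed, report to caller */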
Reported-by: Willi Mann
Author: Willi Mann, Michael Paquier
Discussion: https://postgr.es/m/e77abc4c-4d03-4058-a9d7-ef0035657e04@celonis.com
Backpatch-through: 12
Values corresponding to STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM and
STATISTIC_KIND_BOUNDS_HISTOGRAM were not exposed to pg_stats when these
slot kinds were introduced in 918eee0c49.
This commit adds the missing fields to pg_stats.
Catversion is bumped.
Discussion: https://postgr.es/m/flat/b67d8b57-9357-7e82-a2e7-f6ce6eaeec67@postgrespro.ru
Author: Egor Rogov, Soumyadeep Chakraborty
Reviewed-by: Tomas Vondra, Justin Pryzby, Jian He
Add a reminder that the pg_stats view needs to be modified whenever a new
slot kind is added, to prevent situations like 918eee0c49, where updating
pg_stats was forgotten.
Also, revise the comment stating that only non-null, non-empty rows are
considered for the range length histogram.
Discussion: https://postgr.es/m/flat/b67d8b57-9357-7e82-a2e7-f6ce6eaeec67@postgrespro.ru
Author: Egor Rogov, Soumyadeep Chakraborty
Reviewed-by: Tomas Vondra, Justin Pryzby, Jian He
The brininsert code used to initialize (and destroy) BrinDesc and
BrinRevmap for each tuple, which is not free. This patch initializes
these structures only once, and reuses them for all inserts in the same
command. The data is passed through indexInfo->ii_AmCache.
This also introduces an optional AM callback "aminsertcleanup" that
allows performing custom cleanup in case simply pfree-ing ii_AmCache is
not sufficient (which is the case when the cache contains TupleDesc,
Buffers, and so on).
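A sketch of the caching pattern in brininsert(), with names and allocation
details illustrative:

    BrinInsertState *bistate = (BrinInsertState *) indexInfo->ii_AmCache;

    if (bistate == NULL)
    {
        /* First insert of this command: build the BrinDesc and revmap
         * once, and cache them for all subsequent inserts. */
        bistate = palloc0(sizeof(BrinInsertState));
        bistate->bis_desc = brin_build_desc(idxRel);
        bistate->bis_rmAccess = brinRevmapInitialize(idxRel, &pagesPerRange);
        indexInfo->ii_AmCache = bistate;
    }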
Author: Soumyadeep Chakraborty
Reviewed-by: Alvaro Herrera, Matthias van de Meent, Tomas Vondra
Discussion: https://postgr.es/m/CAE-ML%2B9r2%3DaO1wwji1sBN9gvPz2xRAtFUGfnffpd0ZqyuzjamA%40mail.gmail.com
Explain where pqsignal() came from, what problem it originally solved
without assuming the reader is familiar with historical Unixen, why we
still need it, what it does for us now, and the key differences in
frontend code on Windows.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CA%2BhUKG%2BRst1h3uo%2BXRgdRVnWHBa4mmj5gFbmCzZr73s-Fh_5JA%40mail.gmail.com
When using GSSAPI encryption in non-blocking mode, libpq sometimes
failed with "GSSAPI caller failed to retransmit all data needing
to be retried". The cause is that pqPutMsgEnd rounds its transmit
request down to an even multiple of 8K, and sometimes that can lead
to not requesting a write of data that was requested to be written
(but reported as not written) earlier. That can upset pg_GSS_write's
logic for dealing with not-yet-written data, since it's possible
the data in question had already been incorporated into an encrypted
packet that we weren't able to send during the previous call.
We could fix this with a one-or-two-line hack to disable pqPutMsgEnd's
round-down behavior, but that seems like making the caller work around
a behavior that pg_GSS_write shouldn't expose in this way. Instead,
adjust pg_GSS_write to never report a partial write: it either
reports a complete write, or reflects the failure of the lower-level
pqsecure_raw_write call. The requirement still exists for the caller
to present at least as much data as on the previous call, but with
the caller-visible write start point not moving there is no temptation
for it to present less. We lose some ability to reclaim buffer space
early, but I doubt that that will make much difference in practice.
This also gets rid of a rather dubious assumption that "any
interesting failure condition (from pqsecure_raw_write) will recur
on the next try". We've not seen failure reports traceable to that,
but I've never trusted it particularly and am glad to remove it.
Make the same adjustments to the equivalent backend routine
be_gssapi_write(). It is probable that there's no bug on the backend
side, since we don't have a notion of nonblock mode there; but we
should keep the logic the same to ease future maintenance.
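The revised contract can be sketched as follows; flush_send_buffer() is a
hypothetical helper standing in for the retained-buffer logic:

    /* pg_GSS_write(): report either a complete write of 'len' bytes or
     * the lower-level failure; never a partial count.  Encrypted data
     * that could not be sent earlier stays buffered and is flushed
     * before any new input is consumed. */
    if (flush_send_buffer(conn) < 0)
        return -1;          /* failure or would-block from pqsecure_raw_write */
    /* ... encrypt and send the caller's new data ... */
    return len;             /* all of the caller's data is accounted for */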
Per bug #18210 from Lars Kanis. Back-patch to all supported branches.
Discussion: https://postgr.es/m/18210-4c6d0b14627f2eb8@postgresql.org
A WaitEventSet holds file descriptors or event handles (on Windows).
If FreeWaitEventSet is not called, those fds or handles are leaked.
Use ResourceOwners to track WaitEventSets, to clean those up
automatically on error.
This was a live bug in async Append nodes, if an FDW's
ForeignAsyncRequest function failed. (In back branches, I will apply a
more localized fix for that based on PG_TRY-PG_FINALLY.)
The added test doesn't check for leaking resources, so it passed even
before this commit. But at least it covers the code path.
In passing, fix a misleading comment about what the 'nevents' argument
to WaitEventSetWait means.
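With this change, typical usage can be sketched as follows (assuming
CreateWaitEventSet() now takes the owning ResourceOwner):

    /* Registered with the owner, so an error before the explicit free
     * cannot leak the descriptors. */
    set = CreateWaitEventSet(CurrentResourceOwner, nevents);
    /* ... add events and wait; an elog(ERROR) here no longer leaks ... */
    FreeWaitEventSet(set);      /* also unregisters from the owner */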
Report by Alexander Lakhin, analysis and suggestion for the fix by
Tom Lane. Fixes bug #17828.
Reviewed-by: Alexander Lakhin, Thomas Munro
Discussion: https://www.postgresql.org/message-id/472235.1678387869@sss.pgh.pa.us
The COPY command formed for initial sync used parentheses for tables
with no columns, leading to a syntax error. This patch avoids adding
parentheses for such tables.
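Illustratively, the column list is now emitted only when it is non-empty;
append_column_list() is a hypothetical helper:

    appendStringInfo(&cmd, "COPY %s",
                     quote_qualified_identifier(lrel.nspname, lrel.relname));
    if (lrel.natts > 0)
    {
        /* "(col1, col2, ...)" - skipped entirely for zero-column tables */
        appendStringInfoChar(&cmd, '(');
        append_column_list(&cmd, &lrel);
        appendStringInfoChar(&cmd, ')');
    }
    appendStringInfoString(&cmd, " TO STDOUT");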
Reported-by: Justin G
Author: Vignesh C
Reviewed-by: Peter Smith, Amit Kapila
Backpatch-through: 15
Discussion: http://postgr.es/m/18203-df37fe354b626670@postgresql.org
In replorigin_session_setup(), we were needlessly looping over all
max_replication_slots entries even after finding an existing slot for the
origin. This shouldn't hurt much, except perhaps for large values of
max_replication_slots.
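The fix amounts to leaving the scan as soon as a match is found (sketch,
names illustrative):

    for (i = 0; i < max_replication_slots; i++)
    {
        ReplicationState *curstate = &replication_states[i];

        if (curstate->roident == node)
        {
            session_replication_state = curstate;
            break;      /* no need to scan the remaining slots */
        }
    }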
Author: Antonin Houska
Discussion: http://postgr.es/m/2694.1700471273@antos
As written, the query checked for an access method of type 's', which is
not an AM type supported in the core code.
Error introduced by 8586bf7ed888. As this query was not checking what it
should, backpatch all the way down.
Reviewed-by: Aleksander Alekseev
Discussion: https://postgr.es/m/ZVxJkAJrKbfHETiy@paquier.xyz
Backpatch-through: 12
The original code would miscalculate the total number of cells when the
table to print has more than ~4 billion cells, leading to an unnecessary
error. Repair by changing some computations to be 64 bits wide. Add
some necessary overflow checks.
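A sketch of the widened computation; the overflow handling shown is
illustrative:

    uint64      total_cells;

    /* Do the multiplication in 64 bits; a 32-bit product wraps once the
     * table exceeds ~4 billion cells. */
    total_cells = (uint64) cont->ncolumns * cont->nrows;

    /* Check before allocating total_cells * sizeof(char *) bytes. */
    if (total_cells > SIZE_MAX / sizeof(char *))
        cannot_print(cont);     /* hypothetical error path */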
Author: Hongxu Ma <interma@outlook.com>
Discussion: https://postgr.es/m/TYBP286MB0351B057B101C90D7C1239E6B4E2A@TYBP286MB0351.JPNP286.PROD.OUTLOOK.COM
This refactoring reduces the code in charge of creating replication
slots from two "if" blocks to a single one, making it slightly cleaner.
This change is possible since 1d04a59be31b, which removed the
intermediate code that existed between the two "if" blocks in charge of
initializing the output message buffer.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+PtnJzqKT41Zt8pChRzba=QgCqjtfYvcf84NMj3VFJoKfw@mail.gmail.com
This commit logs messages (at LOG level when log_replication_commands is
set, otherwise at DEBUG1 level) when walsenders acquire and release
replication slots. These messages help track the lifetime of a
replication slot: one can tell how long a streaming standby, logical
subscriber, or replication slot consumer has been down. These messages
will be useful on production servers to debug and analyze inactive
replication slots.
Note that these messages are emitted only for walsenders, not for
backends. This is because walsenders are the ones that typically hold
replication slots for longer durations, unlike backends, which hold them
only while executing replication-related functions.
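The messages are emitted along these lines (exact wording illustrative):

    ereport(log_replication_commands ? LOG : DEBUG1,
            errmsg("acquired logical replication slot \"%s\"",
                   NameStr(slot->data.name)));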
Author: Bharath Rupireddy
Reviewed-by: Peter Smith, Amit Kapila, Alvaro Herrera
Discussion: http://postgr.es/m/CALj2ACX17G7F-jeLt+7KhJ6YxVeRwR8Zk0rDh4VnT546o0UpTQ@mail.gmail.com
Currently important build targets are somewhat hard to discover. This commit
documents important meson build targets in the sgml documentation. But it's
awkward to have to look up build targets in the docs when hacking, so this also
adds a 'help' target, printing out the same information. To avoid having to
duplicate information in two places, generate both docbook and interactive
docs from a single source.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20231108232121.ww542mt6lfo6f26f@awork3.anarazel.de
This undoes the change, made in 969509c3f2e, of what the 'docs' target
builds. Tom was concerned with having a target to build just the html
docs, which a prior commit now provides explicitly.
A subsequent commit will overhaul the documentation for the documentation
targets.
While at it, move the 'all' target in doc/src/sgml/Makefile up to just
after the default "html" target, and add a comment explaining that "all"
is *not* the default target.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20230209203855.njrepiupc3rmehfw@awork3.anarazel.de
Discussion: https://postgr.es/m/20231103163848.26egkh5qdgw3vmil@awork3.anarazel.de
We have toplevel html and man targets in the autoconf build as well. It'd
be odd to have an 'html' target while the install target is
'install-doc-html', so rename the install targets to match.
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20231103163848.26egkh5qdgw3vmil@awork3.anarazel.de
Recent commit f26c2368dc introduced a search_path cache, but left some
potential out-of-memory hazards. Simplify the code and make it safer
against OOM.
This change reintroduces one list_copy(), losing a small amount of the
performance gained in f26c2368dc. A future change may optimize away
the list_copy() again if it can be done in a safer way.
Discussion: https://postgr.es/m/e6fded24cb8a2c53d4ef069d9f69cc7baaafe9ef.camel@j-davis.com
As coded, the start block calculated by BufFileAppend() would overflow
once more than 16k files are used with a default block size. This issue
existed before b1e5c9fa9ac4, but there's no reason not to be clean about
it.
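A sketch of the widened calculation (constant and field names as in
buffile.c, but illustrative):

    /* With 1GB segment files of 8kB blocks, BUFFILE_SEG_SIZE is 131072,
     * so an int product overflows once ~16k files are appended; compute
     * the start block in 64-bit arithmetic instead. */
    int64       startBlock = (int64) target->numFiles * BUFFILE_SEG_SIZE;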
Per report from Coverity, with a fix suggested by Tom Lane.
The DROP STATISTICS code failed to properly lock the table, leading to
    ERROR: tuple concurrently deleted
when executed concurrently with ANALYZE.
Fixed by modifying RemoveStatisticsById() to acquire the same lock as
ANALYZE. This function is called only by DROP STATISTICS, as ANALYZE
calls RemoveStatisticsDataById() directly.
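Sketched, RemoveStatisticsById() now locks the parent table first
(illustrative):

    /* Acquire the same lock ANALYZE takes on the table, so a concurrent
     * ANALYZE serializes against the drop instead of failing. */
    rel = table_open(relid, ShareUpdateExclusiveLock);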
Reported by Justin Pryzby, fix by me. Backpatch through 12. The code has
been like this since it was introduced in 10, but older releases are EOL.
Reported-by: Justin Pryzby
Reviewed-by: Tom Lane
Backpatch-through: 12
Discussion: https://postgr.es/m/ZUuk-8CfbYeq6g_u@pryzbyj2023
Commits 146604ec43 and a898b409f6 added overflow checks to
interval_mul(), but not to interval_div(), which contains almost
identical code, and so is susceptible to the same kinds of
overflows. In addition, those checks did not catch all possible
overflow conditions.
Add additional checks to the "cascade down" code in interval_mul(),
and copy all the overflow checks over to the corresponding code in
interval_div(), so that they both generate "interval out of range"
errors, rather than returning bogus results.
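The added checks follow this pattern (a sketch based on the interval_mul()
code path):

    result_double = span->time * factor;
    if (isnan(result_double) || !FLOAT8_FITS_IN_INT64(result_double))
        ereport(ERROR,
                (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                 errmsg("interval out of range")));
    result->time = (int64) result_double;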
Given that these errors are relatively easy to hit, back-patch to all
supported branches.
Per bug #18200 from Alexander Lakhin, and subsequent investigation.
Discussion: https://postgr.es/m/18200-5ea288c7b2d504b1%40postgresql.org
On the MERGE page, the description of the privileges required could be
taken to imply that the SELECT privilege is required on all columns of
the data source, whereas actually it is only required on the columns
referred to by conditions or expressions in the MERGE command. Re-word
it to make that a little clearer, and mention expressions as well as
conditions.
Also, add a glossary entry for MERGE, and nearby on the glossary page,
mention MERGE in the list of commands that cannot update a
materialized view.
Noted by Jian He. Patch by me, reviewed by Jian He.
Discussion: https://postgr.es/m/CACJufxHuSoRXKwr0MtSFLXuT2nFVWcVfEWhxg7qdP9h%2Bs3a%2BUw%40mail.gmail.com
Compute the size first, then allocate, then update the structure.
Previously, an out-of-memory error while growing could leave the
hashtable in an inconsistent state.
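The safe ordering can be sketched as follows (names illustrative):

    /* Compute the new size first, then allocate, which may fail ... */
    newsize = tb->size * 2;
    newdata = palloc0(newsize * sizeof(tb->data[0]));   /* may ereport(ERROR) */

    /* ... and only then update the hashtable itself, so a failed
     * allocation leaves the old state intact. */
    tb->size = newsize;
    tb->data = newdata;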
Discussion: https://postgr.es/m/20231117201334.eyb542qr5yk4gilq@awork3.anarazel.de
Reviewed-by: Andres Freund
Reviewed-by: Gurjeet Singh