mirror of https://github.com/postgres/postgres.git synced 2025-06-26 12:21:12 +03:00
Commit Graph

61535 Commits

Author SHA1 Message Date
7d4667c620 Revert "postgres_fdw: Inherit the local transaction's access/deferrable modes."
We concluded that commit e5a3c9d9b is a feature rather than a fix; since
it was added after feature freeze, revert it.

Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/ed2296f1-1a6b-4932-b870-5bb18c2591ae%40oss.nttdata.com
2025-06-08 17:30:00 +09:00
73e26cbeb5 doc PG 18 relnotes: add AFTER trigger user change item
Reported-by: Noah Misch

Discussion: https://postgr.es/m/20250603172123.5f.nmisch@google.com
2025-06-07 11:25:17 -04:00
37e5f0b61f doc PG 18 relnotes: adjust wording of initdb item 48814415d5
And move to the top of the incompatibility list.  This will impact users
more than any other incompatibility item because of pg_upgrade.
2025-06-07 11:06:47 -04:00
1a857348e4 plpython: Remove obsolete test expected file
Move plpython_error_5.out to plpython_error.out, since the pre-3.5
version is no longer needed now that we have raised the Python
requirement to 3.6 (commit 45363fca63).

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/d620e7c6-becc-4a8e-9b43-eea0da55faf2@eisentraut.org
2025-06-07 09:04:29 +02:00
5b40feab59 Improve CREATE DATABASE error message for invalid libc locale.
Discussion: https://postgr.es/m/73959a14-267b-49c1-8293-291b175682cb@manitou-mail.org
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
2025-06-06 15:28:51 -07:00
a31767fc09 Use NULL instead of 0 for pointer arguments.
Commit 5fe08c006c fixed this for calls to dshash_create().  This
commit fixes calls to dshash_attach() and dsa_create_in_place().
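
As a minimal, self-contained illustration of the style point (not the
actual call sites; takes_pointer() is a made-up function):

    #include <stddef.h>

    extern void takes_pointer(void *arg);

    void
    example(void)
    {
        takes_pointer(0);       /* legal C, but hides that the argument is a pointer */
        takes_pointer(NULL);    /* preferred: the intent is obvious */
    }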

Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aECi_gSD9JnVWQ8T%40nathan
2025-06-06 12:08:17 -05:00
304862973e Fixed signed/unsigned mismatch in test_dsm_registry.
Oversight in commit 8b2bcf3f28.

Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aECi_gSD9JnVWQ8T%40nathan
Backpatch-through: 17
2025-06-06 11:40:52 -05:00
e6eed40e44 Avoid BufferGetLSNAtomic() calls during nbtree scans.
Delay calling BufferGetLSNAtomic() until we finish reading a page that
actually contains items that btgettuple will return to the executor.
This reduces the number of calls during plain index scans (we'll only
call BufferGetLSNAtomic() when _bt_readpage returns true), and totally
eliminates calls during index-only scans, bitmap index scans, and plain
index scans of an unlogged relation.
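
A hedged sketch of the shape of the change (simplified, not the actual
nbtree code; page_has_matching_items stands in for _bt_readpage's
result):

    /* Only fetch the page LSN once the page is known to return items. */
    if (page_has_matching_items)
        so->currPos.lsn = BufferGetLSNAtomic(so->currPos.buf);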

Currently, when checksums (or wal_log_hints) are enabled, acquiring a
page's LSN in BufferGetLSNAtomic() involves locking the buffer header
(which involves the use of spinlocks).  Testing has shown that enabling
page-level checksums causes large regressions with certain workloads,
especially on larger multi-socket systems.

The regression isn't tied to any Postgres 18 commit.  However, Postgres
18 commit 04bec894 made initdb use checksums by default, so it seems
prudent to address the problem now.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/941f0190-e3c6-4622-9ac7-c04e936e5fdb@vondra.me
Discussion: https://postgr.es/m/CAH2-Wzk-Dg5XWs_jDuiHt4_7ryrSY+n=vxmHY51EVqPDFsKXmg@mail.gmail.com
2025-06-06 10:19:44 -04:00
016e407f4b pg_prewarm: Allow autoprewarm to use more than 1GB to dump blocks.
Reported-by: Daria Shanina <vilensipkdm@gmail.com>
Author: Daria Shanina <vilensipkdm@gmail.com>
Author: Robert Haas <robertmhaas@gmail.com>
Backpatch-through: 13
2025-06-06 08:18:27 -04:00
c37be39a74 Doc: improve description of which role runs a trigger.
Refine wording from commit 01463e1cc.

Author: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20250605163441.2f.nmisch@google.com
2025-06-05 15:24:15 -04:00
54c6ea8c81 nbtree: Remove useless row compare arg.
Use of a RowCompare key makes nbtree index scans ineligible to use
pstate.forcenonrequired following recent bugfix commit 5f4d98d4.
There's no longer any need for _bt_check_rowcompare to accept a
forcenonrequired argument, so remove it.
2025-06-05 14:50:43 -04:00
e6f98d8848 Avoid bogus scans of partitions when marking FKs enforced
Similar to commit cc733ed164: when an unenforced foreign key that
references a partitioned table is altered to be enforced, we scan the
constrained table once for each partition of the referenced partitioned
table.  This is bogus and likely to cause the ALTER TABLE to fail: we
must scan the constrained table only against the top-level partitioned
table.  Oversight in commit eec0040c4b.  Fix by eliding those scans.

Author: Amul Sul <sulamul@gmail.com>
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF1e_gPOLtsDoaE4VCgQPC8KZW_kPAjPR5Rvv4Ew=fb2A@mail.gmail.com
2025-06-05 18:39:06 +02:00
04acad82b0 Doc: you must own the target object to use SECURITY LABEL.
For some reason this wasn't mentioned before.

Author: Patrick Stählin <me@packi.ch>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/931e012a-57ba-41ba-9b88-24323a46dec5@packi.ch
Backpatch-through: 13
2025-06-05 11:30:12 -04:00
cc733ed164 Avoid bogus scans of partitions when validating FKs to partitioned tables
Validating an unvalidated foreign key that references a partitioned
table would try to queue validations for each individual partition of
the referenced table, but this is wrong: each individual partition would
not necessarily have all the referenced rows, so errors would be raised.
Avoid doing that.  The pg_constraint rows that cause this to happen are
only there to support the action triggers that implement the DELETE/
UPDATE actions of the FK, so no validating scan is necessary.

This was an oversight in commit b663b9436e.

An equivalent oversight exists for NOT ENFORCED constraints, which is
not fixed in this commit.

Author: Amul Sul <sulamul@gmail.com>
Reported-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/26983.1748418675@localhost
2025-06-05 17:17:13 +02:00
4b05ebf095 Change role names used in trigger test.
The choices made in commit 01463e1cc might pose copyright hazards,
and are more cutesy than informative anyway.

Reported-by: Noah Misch <noah@leadboat.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20250415155850.9b.nmisch@google.com
2025-06-05 11:05:53 -04:00
112e40b867 psql: fix order of join clauses when listing extensions
Commit d696406a9b added a new join to the query for extensions, but did
so in the wrong place, causing the AND clause to be applied to the wrong
join.

Author:	Suraj Kharage <suraj.kharage@enterprisedb.com>
Reviewed-By: Dilip Kumar <dilipbalaut@gmail.com>
Discussion: https://postgr.es/m/CAF1DzPVBrN-cmPB2zb7ZU=2J4vEF2fNdArGCG9w+9fnKq4v8tg@mail.gmail.com
2025-06-05 09:54:16 +02:00
b87163e5f3 Fix copy-pasto with process count calculation in method_io_uring.c
This commit replaces the formula used for "TotalProcs" with a call to
pgaio_uring_procs() in pgaio_uring_shmem_init() for the shared memory
initialization.  The function computes exactly the same value, so this
removes the duplication.

pgaio_uring_procs() is used for shared memory sizing and a sanity check,
and it has some documentation explaining some reasoning behind the
formula.

Author: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/ME0P300MB044521067A1EDDA9EDEC3793B66DA@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-06-05 09:39:24 +09:00
f9b1192190 doc: Remove notes about "unencrypted" passwords.
The documentation for the pg_authid system catalog and the
pg_shadow system view indicates that passwords might be stored in
cleartext, but that hasn't been possible for some time.

Oversight in commit eb61136dc7.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aD2yKkZro4nbl5ol%40nathan
Backpatch-through: 13
2025-06-04 09:47:25 -05:00
30c15987d9 doc: Update description of pg_constraint.convalidated
The previous description listed the constraint types that this column
was used for, but that was outdated, since not-valid not-null
constraints are now possible.  So just remove that qualification,
rather than trying to keep it updated.

Author: jian he <jian.universality@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://www.postgresql.org/message-id/flat/CACJufxFo4yTwzbSZrP%2BzQiR6_M00skoZMFaUnNJCdY6he%3DuQfA%40mail.gmail.com
2025-06-04 15:27:44 +02:00
48814415d5 doc PG 18 relnotes: Add incompatibility note about checksums now default
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://www.postgresql.org/message-id/flat/CAKAnmmKwiMHik5AHmBEdf5vqzbOBbcwEPHo4-PioWeAbzwcTOQ%40mail.gmail.com
2025-06-04 12:06:08 +02:00
f777d77387 Don't strip $libdir from LOAD command
Commit 4f7f7b0375 implemented the extension_control_path GUC, and to
make it work it was decided that we should strip the $libdir/ prefix
from module_pathname in .control files, so that extensions don't need
to worry about this change.

This strip logic was implemented in expand_dynamic_library_name(),
which works fine when executing the SQL functions from extensions, but
that function is also called when the LOAD command is executed, and
since the user may explicitly pass the $libdir prefix in the LOAD
parameter, we should not strip it in that case.

This commit fixes the issue by moving the strip logic from
expand_dynamic_library_name() to load_external_function(), which is
called when running the SQL scripts from extensions.
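
A hedged sketch of the strip itself (illustrative only, not the exact
code):

    /* Strip a leading "$libdir/" before resolving the module name. */
    if (strncmp(filename, "$libdir/", strlen("$libdir/")) == 0)
        filename += strlen("$libdir/");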

Reported-by: Evan Si <evsi@amazon.com>
Author: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Bug: #18920
Discussion: https://www.postgresql.org/message-id/flat/18920-b350b1c0a30af006%40postgresql.org
2025-06-04 11:38:12 +02:00
7f3381c7ee psql: Abort connection when using \syncpipeline after COPY TO/FROM
When the backend reads COPY data, it ignores all sync messages, as per
c01641f8ae.  With psql pipelines, it is possible to manually send sync
messages with \syncpipeline, which leaves the frontend in an
unrecoverable state as the backend will not send the necessary
ReadyForQuery message that is expected to feed psql's result
consumption logic.

It could be possible to artificially reduce piped_syncs and
requested_results; however, libpq's state would still have queued sync
messages in its command queue, and the only way to consume those without
directly calling pqCommandQueueAdvance() is to process ReadyForQuery
messages that won't be sent since the backend ignores these.  Perhaps
this could be improved in the future, but I am not really excited about
introducing this amount of complication in libpq to manipulate the
message queues without a better use case to support it.

Hence, this patch aborts the connection if we detect excessive sync
messages after a COPY in a pipeline to avoid staying in an inconsistent
protocol state, which is the best thing we can do with pipelines in
psql for now.  Note that this change does not prevent wrapping a set
of queries inside a block made of \startpipeline and \endpipeline, only
the use of \syncpipeline for a COPY.

Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/18944-8a926c30f68387dd@postgresql.org
2025-06-04 09:01:29 +09:00
58fbfde152 Fix incorrect format placeholders 2025-06-03 21:38:04 +02:00
0e164eb9f4 Fix a pg_dump scenario for platforms where SEEK_CUR != 1.
POSIX allows such platforms.  Given the lack of complaints, we may not
currently test on such a platform.  This is new in v18 (commit
7d5c83b4e9), so no back-patch.
2025-06-03 11:18:52 -07:00
73bdcfab35 Rename log_lock_failure GUC to log_lock_failures for consistency.
This commit renames the GUC log_lock_failure to log_lock_failures
to align with the existing similar setting log_lock_waits, which uses
the plural form. This improves naming consistency across related GUCs.

Suggested-by: Peter Eisentraut <peter@eisentraut.org>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/7a8198b6-d5b8-4910-b41e-8d3efcbb015d@eisentraut.org
2025-06-03 10:02:55 +09:00
aa87f69c00 Disallow "=" in names of reloptions and foreign-data options.
We store values for these options as array elements with the syntax
"name=value", hence a name containing "=" confuses matters when
it's time to read the array back in.  Since validation of the
options is often done (long) after this conversion to array format,
that leads to confusing and off-point error messages.  We can
improve matters by rejecting names containing "=" up-front.

(Probably a better design would have involved pairs of array
elements, but it's too late now --- and anyway, there's no
evident use-case for option names like this.  We already
reject such names in some other contexts such as GUCs.)
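
A hedged sketch of the kind of up-front check this describes
(illustrative, not the exact code):

    if (strchr(def->defname, '=') != NULL)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_NAME),
                 errmsg("invalid option name \"%s\": must not contain \"=\"",
                        def->defname)));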

Reported-by: Chapman Flack <jcflack@acm.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chapman Flack <jcflack@acm.org>
Discussion: https://postgr.es/m/6830EB30.8090904@acm.org
Backpatch-through: 13
2025-06-02 15:22:44 -04:00
31a7e175fd Correct heap vacuum boundary state setup ordering
Commit 052026c9b9 mistakenly reordered setup steps in heap_vacuum_rel(),
moving RelationGetNumberOfBlocks() before vacuum_get_cutoffs().

OldestXmin must be determined before RelationGetNumberOfBlocks()
calculates the number of blocks in the relation that will be vacuumed.
Otherwise tuples older than OldestXmin may be inserted into the end of
the relation into blocks that are not vacuumed. If additional tuples
newer than those inserted into unscanned blocks but older than
OldestXmin are inserted into free space earlier in the relation, the
result could be advancing pg_class.relfrozenxid to a newer value than an
unfrozen XID in one of the unscanned heap pages.

Assigning an incorrect relfrozenxid can lead to data loss, so it is
imperative that it correctly reflect the oldest unfrozen xid.
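
A hedged sketch of the required ordering (simplified from
heap_vacuum_rel(); argument lists abbreviated):

    /* Establish OldestXmin and the other cutoffs first ... */
    vacuum_get_cutoffs(rel, params, &vacrel->cutoffs);

    /* ... and only then snapshot the relation size to be scanned. */
    rel_pages = RelationGetNumberOfBlocks(rel);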

Reported-by: Peter Geoghegan <pg@bowt.ie>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzntqvVEdbbpqG5JqSZGuLWmy4PBfUO-OswfivKchr2gvw%40mail.gmail.com
2025-06-02 10:54:07 -04:00
fc32be3c94 Fix incorrect format placeholders
Fixes for return type of dclist_count().
2025-06-02 10:12:58 +02:00
32edf732e8 Rename gist stratnum support function
Commit 7406ab623f added a gist support function that we internally
refer to by the symbol GIST_STRATNUM_PROC.  This translated from
"well-known" strategy numbers to opfamily-specific strategy numbers.
However, we later (commit 630f9a43ce) changed this to fit into
index-AM-level compare type mapping, so this function actually now
maps from compare type to opfamily-specific strategy numbers.  So this
name is no longer fitting.

Moreover, the index AM level also supports the opposite, a function to
map from strategy number to compare type.  This is currently not
supported in gist, but one might wonder what this function is supposed
to be called when it is added.

This patch changes the naming of the gist-level functionality to be
more in line with the index-AM-level functionality.  This makes sense
because these are essentially the same thing on different levels.
This also changes the names of the externally visible functions that
are provided for use as such a support function.

Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://www.postgresql.org/message-id/37ebb1d9-9036-485f-a215-e55435689917%40eisentraut.org
2025-06-02 08:41:27 +02:00
5231ed8262 Use replay LSN as target for cascading logical WAL senders
A cascading WAL sender doing logical decoding (also known as doing its
work on a standby) has been using as its flush LSN the value returned by
GetStandbyFlushRecPtr() (the last position safely flushed to disk).  This
is incorrect, as such processes are only able to decode changes up to the
LSN that has been replayed by the startup process.

This commit changes cascading logical WAL senders to use the replay LSN,
as returned by GetXLogReplayRecPtr().  This distinction is important
particularly during shutdown, when WAL senders need to send any
remaining available data to their clients, switching WAL senders to a
caught-up state.  Using the latest flush LSN rather than the replay LSN
could cause the WAL senders to get stuck in an infinite loop, preventing
them from shutting down, as the startup process does not run while WAL
senders attempt to catch up, so they could keep waiting for work that
would never happen.
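
A hedged sketch of the target selection (simplified; am_cascading_walsender
and the helper calls are the real symbols, but the surrounding code is
illustrative):

    XLogRecPtr  target_lsn;

    if (am_cascading_walsender)
        target_lsn = GetXLogReplayRecPtr(NULL);   /* what the startup process applied */
    else
        target_lsn = GetFlushRecPtr(NULL);        /* primary: what is flushed to disk */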

Backpatch down to v16, where logical decoding on standbys was
introduced.

Author: Alexey Makhmutov <a.makhmutov@postgrespro.ru>
Reviewed-by: Ajin Cherian <itsajin@gmail.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/52138028-7246-421c-9161-4fa108b88070@postgrespro.ru
Backpatch-through: 16
2025-06-02 12:03:59 +09:00
c98975ba85 Add commit 4672b6223 to .git-blame-ignore-revs. 2025-06-01 14:58:42 -04:00
4672b62239 Run pgindent on the previous commit.
Clean up after rearranging PG_TRY blocks.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2954090.1748723636@sss.pgh.pa.us
Backpatch-through: 13
2025-06-01 14:55:24 -04:00
c6f7f11d8f Fix edge-case resource leaks in PL/Python error reporting.
PLy_elog_impl and its subroutine PLy_traceback intended to avoid
leaking any PyObject reference counts, but their coverage of the
matter was sadly incomplete.  In particular, out-of-memory errors
in most of the string-construction subroutines could lead to
reference count leaks, because those calls were outside the
PG_TRY blocks responsible for dropping reference counts.

Fix by (a) adjusting the scopes of the PG_TRY blocks, and
(b) moving the responsibility for releasing the reference counts
of the traceback-stack objects to PLy_elog_impl.  This requires
some additional "volatile" markers, but not too many.
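
A hedged sketch of the pattern (not the actual PLy_traceback code; obj
and the message-building step are placeholders):

    PyObject   *volatile so = NULL;

    PG_TRY();
    {
        so = PyObject_Str(obj);             /* takes a new reference (or NULL) */
        /* ... string construction here may elog() on out-of-memory ... */
    }
    PG_FINALLY();
    {
        Py_XDECREF(so);                     /* drop the reference on all paths */
    }
    PG_END_TRY();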

In passing, fix an ancient thinko: use of the "e_module_o" PyObject
was guarded by "if (e_type_s)", where surely "if (e_module_o)"
was meant.  This would only have visible consequences if the
"__name__" attribute were present but the "__module__" attribute
wasn't, which apparently never happens; but someday it might.

Rearranging the PG_TRY blocks requires indenting a fair amount
of code one more tab stop, which I'll do separately for clarity.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2954090.1748723636@sss.pgh.pa.us
Backpatch-through: 13
2025-06-01 14:48:35 -04:00
e5a3c9d9b5 postgres_fdw: Inherit the local transaction's access/deferrable modes.
Previously, postgres_fdw always 1) opened a remote transaction in READ
WRITE mode even when the local transaction was READ ONLY, so a READ ONLY
local transaction that referenced a foreign table mapped to a remote
view executing a volatile function could end up writing on the remote
side, and 2) opened the remote transaction in NOT DEFERRABLE mode even
when the local transaction was DEFERRABLE, so a SERIALIZABLE READ ONLY
DEFERRABLE local transaction using it could abort due to a serialization
failure on the remote side.

To avoid these, modify postgres_fdw to open a remote transaction in the
same access/deferrable modes as the local transaction.  This commit also
modifies it to open a remote subtransaction in the same access mode as
the local subtransaction.
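
A hedged sketch of how the remote START TRANSACTION command could be
assembled (illustrative; XactReadOnly and XactDeferrable are the
backend's flags for the local transaction):

    StringInfoData sql;

    initStringInfo(&sql);
    appendStringInfoString(&sql,
                           "START TRANSACTION ISOLATION LEVEL REPEATABLE READ");
    if (XactReadOnly)
        appendStringInfoString(&sql, " READ ONLY");
    if (XactDeferrable)
        appendStringInfoString(&sql, " DEFERRABLE");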

Although these issues have existed since the introduction of
postgres_fdw, there have been no reports from the field, so it seems
fine to just fix them in master only.

Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAPmGK16n_hcUUWuOdmeUS%2Bw4Q6dZvTEDHb%3DOP%3D5JBzo-M3QmpQ%40mail.gmail.com
2025-06-01 17:30:00 +09:00
b006bcd531 Fix MERGE into a plain inheritance parent table.
When a MERGE's target table is the parent of an inheritance tree, any
INSERT actions insert into the parent table using ModifyTableState's
rootResultRelInfo. However, there are two bugs in the way it is
initialized:

1. ExecInitMerge() incorrectly uses a different ResultRelInfo entry
from ModifyTableState's resultRelInfo array to build the insert
projection, which may not be compatible with rootResultRelInfo.

2. ExecInitModifyTable() does not fully initialize rootResultRelInfo.
Specifically, ri_WithCheckOptions, ri_WithCheckOptionExprs,
ri_returningList, and ri_projectReturning are not initialized.

This can lead to crashes, or to incorrect query results due to failing
to check WCOs or to process the RETURNING list for INSERT actions.

Fix both these bugs in ExecInitMerge(), noting that it is only
necessary to fully initialize rootResultRelInfo if the MERGE has
INSERT actions and the target table is a plain inheritance parent.

Backpatch to v15, where MERGE was introduced.

Reported-by: Andres Freund <andres@anarazel.de>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/4rlmjfniiyffp6b3kv4pfy4jw3pciy6mq72rdgnedsnbsx7qe5@j5hlpiwdguvc
Backpatch-through: 15
2025-05-31 12:12:58 +01:00
e050af2868 Change internal plan ID type from uint64 to int64
uint64 was chosen to be consistent with the type used by the query ID,
but the conclusion of a recent discussion for the query ID is that int64
is a better fit, as the signed form is what is shown to the user in PGSS
or EXPLAIN outputs.

This commit changes the plan ID to use int64, following c3eda50b06
that has done the same for the query ID.

The plan ID is new to v18, introduced in 2a0cd38da5.

Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/aCvzJNwetyEI3Sgo@paquier.xyz
2025-05-31 09:40:45 +09:00
706054b11b Ensure we have a snapshot when updating various system catalogs.
A few places that access system catalogs don't set up an active
snapshot before potentially accessing their TOAST tables.  To fix,
push an active snapshot just before each section of code that might
require accessing one of these TOAST tables, and pop it shortly
afterwards.  While at it, this commit adds some rather strict
assertions in an attempt to prevent such issues in the future.
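
The pattern, as a hedged sketch:

    PushActiveSnapshot(GetTransactionSnapshot());
    /* ... catalog update that might fetch values from a TOAST table ... */
    PopActiveSnapshot();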

Commit 16bf24e0e4 recently removed pg_replication_origin's TOAST
table in order to fix the same problem for that catalog.  On the
back-branches, those bugs are left in place.  We cannot easily
remove a catalog's TOAST table on released major versions, and only
replication origins with extremely long names are affected.  Given
the low severity of the issue, fixing older versions doesn't seem
worth the trouble of significantly modifying the patch.

Also, on v13 and v14, the aforementioned strict assertions have
been omitted because commit 2776922201, which added
HaveRegisteredOrActiveSnapshot(), was not back-patched.  While we
could probably back-patch it now, I've opted against it because it
seems unlikely that new TOAST snapshot issues will be introduced in
the oldest supported versions.

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18127-fe54b6a667f29658%40postgresql.org
Discussion: https://postgr.es/m/18309-c0bf914950c46692%40postgresql.org
Discussion: https://postgr.es/m/ZvMSUPOqUU-VNADN%40nathan
Backpatch-through: 13
2025-05-30 15:17:28 -05:00
232d8caeaa Fix memory leakage in postgres_fdw's DirectModify code path.
postgres_fdw tries to use PG_TRY blocks to ensure that it will
eventually free the PGresult created by the remote modify command.
However, it's fundamentally impossible for this scheme to work
reliably when there's RETURNING data, because the query could fail
in between invocations of postgres_fdw's DirectModify methods.
There is at least one instance of exactly this situation in the
regression tests, and the ensuing session-lifespan leak is visible
under Valgrind.

We can improve matters by using a memory context reset callback
attached to the ExecutorState context.  That ensures that the
PGresult will be freed when the ExecutorState context is torn
down, even if control never reaches postgresEndDirectModify.
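
A hedged sketch of the callback wiring (local variable names are
illustrative; the callback API itself is real):

    static void
    release_pgresult(void *arg)
    {
        PQclear((PGresult *) arg);
    }

    /* ... after obtaining the PGresult "res" during DirectModify setup ... */
    MemoryContextCallback *cb;

    cb = MemoryContextAlloc(estate->es_query_cxt, sizeof(MemoryContextCallback));
    cb->func = release_pgresult;
    cb->arg = res;
    MemoryContextRegisterResetCallback(estate->es_query_cxt, cb);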

I have little faith that there aren't other potential PGresult
leakages in the backend modules that use libpq.  So I think it'd
be a good idea to apply this concept universally by creating
infrastructure that attaches a reset callback to every PGresult
generated in the backend.  However, that seems too invasive for
v18 at this point, let alone the back branches.  So for the
moment, apply this narrow fix that just makes DirectModify safe.
I have a patch in the queue for the more general idea, but it
will have to wait for v19.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
Backpatch-through: 13
2025-05-30 13:45:41 -04:00
d98cefe114 Allow larger packets during GSSAPI authentication exchange.
Our GSSAPI code only allows packet sizes up to 16kB.  However it
emerges that during authentication, larger packets might be needed;
various authorities suggest 48kB or 64kB as the maximum packet size.
This limitation caused login failure for AD users who belong to many
AD groups.  To add insult to injury, we gave an unintelligible error
message, typically "GSSAPI context establishment error: The routine
must be called again to complete its function: Unknown error".

As noted in code comments, the 16kB packet limit is effectively a
protocol constant once we are doing normal data transmission: the
GSSAPI code splits the data stream at those points, and if we change
the limit then we will have cross-version compatibility problems
due to the receiver's buffer being too small in some combinations.
However, during the authentication exchange the packet sizes are
not determined by us, but by the underlying GSSAPI library.  So we
might as well just try to send what the library tells us to.
An unpatched recipient will fail on a packet larger than 16kB,
but that's not worse than the sender failing without even trying.
So this doesn't introduce any meaningful compatibility problem.

We still need a buffer size limit, but we can easily make it be
64kB rather than 16kB until transport negotiation is complete.
(Larger values were discussed, but don't seem likely to add
anything.)
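
A hedged sketch of the two-limit scheme (the constant names here are
hypothetical; the real definitions may differ):

    #define PQ_GSS_MAX_PACKET_SIZE       16384  /* after transport negotiation */
    #define PQ_GSS_AUTH_MAX_PACKET_SIZE  65536  /* during the auth exchange */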

Reported-by: Chris Gooch <cgooch@bamfunds.com>
Fix-suggested-by: Jacob Champion <jacob.champion@enterprisedb.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/DS0PR22MB5971A9C8A3F44BCC6293C4DABE99A@DS0PR22MB5971.namprd22.prod.outlook.com
Backpatch-through: 13
2025-05-30 12:55:15 -04:00
961553daf5 Make XactLockTableWait() and ConditionalXactLockTableWait() more interruptible.
Previously, XactLockTableWait() and ConditionalXactLockTableWait() could enter
a non-interruptible loop when they successfully acquired a lock on a transaction
but the transaction still appeared to be running. Since this loop continued
until the transaction completed, it could result in long, uninterruptible waits.

Although this scenario is generally unlikely since XactLockTableWait() and
ConditionalXactLockTableWait() can basically acquire a transaction lock
only when the transaction is not running, it can occur in a hot standby.
In such cases, the transaction may still appear active due to
the KnownAssignedXids list, even while no lock on the transaction exists.
For example, this situation can happen when creating a logical replication
slot on a standby.

The cause of the non-interruptible loop was the absence of CHECK_FOR_INTERRUPTS()
within it. This commit adds CHECK_FOR_INTERRUPTS() to the loop in both functions,
ensuring they can be interrupted safely.
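
A hedged sketch of the loop shape (simplified from the real wait loop;
xid is the transaction being waited on):

    for (;;)
    {
        CHECK_FOR_INTERRUPTS();     /* the added call: honor cancel/terminate */

        /* ... try to acquire, then release, the lock on the xid ... */

        if (!TransactionIdIsInProgress(xid))
            break;
        pg_usleep(1000L);           /* wait 1ms before checking again */
    }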

Back-patch to all supported branches.

Author: Kevin K Biju <kevinkbiju@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAM45KeELdjhS-rGuvN=ZLJ_asvZACucZ9LZWVzH7bGcD12DDwg@mail.gmail.com
Backpatch-through: 13
2025-05-31 00:08:40 +09:00
c3eda50b06 Change internal queryid type from uint64 to int64
uint64 was perhaps chosen in cff440d36 as the type was uint32 prior to
that widening work.

Having this as uint64 doesn't make much sense and just adds the overhead of
having to remember that we always output this in its signed form.  Let's
remove that overhead.

The signed form output is seemingly required since we have no way to
represent the full range of uint64 in an SQL type.  We use BIGINT in places
like pg_stat_statements, which maps directly to int64.

The release notes "Source Code" section may want to mention this
adjustment as some extensions may wish to adjust their code.

Author: David Rowley <dgrowleyml@gmail.com>
Suggested-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/50cb0c8b-994b-48f9-a1c4-13039eb3536b@eisentraut.org
2025-05-30 22:59:39 +12:00
03c53a7314 doc PG 18 relnotes: modify async I/O item for other improvements
Add "etc." to indicate other actions will also be improved by
asynchronous I/O.

Reported-by: Melanie Plageman

Discussion: https://postgr.es/m/CAAKRu_bqjgSYA+OdemL-X91Yv53OwsVARZy+-tRyj8YQ=kcj0A@mail.gmail.com
2025-05-29 12:37:05 -04:00
470273da0f Avoid resource leaks when a dblink connection fails.
If we hit out-of-memory between creating the PGconn and inserting
it into dblink's hashtable, we'd lose track of the PGconn, which
is quite bad since it represents a live connection to a remote DB.
Fix by rearranging things so that we create the hashtable entry
first.

Also reduce the number of states we have to deal with by getting rid
of the separately-allocated remoteConn object, instead allocating it
in-line in the hashtable entries.  (That incidentally removes a
session-lifespan memory leak observed in the regression tests.)

There is an apparently-irreducible remaining OOM hazard, which is
that if the connection fails at the libpq level (i.e., it's
CONNECTION_BAD) then we have to pstrdup the PGconn's error message
before we can release it, and theoretically that could fail.  However,
in such cases we're only leaking memory, not a live remote connection,
so I'm not convinced that it's worth sweating over.

This is a pretty low-probability failure mode of course, but losing
a live connection seems bad enough to justify back-patching.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/1346940.1748381911@sss.pgh.pa.us
Backpatch-through: 13
2025-05-29 10:39:55 -04:00
3c4d7557e0 Fix assertion failure in pg_prewarm() on objects without storage.
An assertion test added in commit 049ef33 could fail when pg_prewarm()
was called on objects without storage, such as partitioned tables.
This resulted in the following failure in assert-enabled builds:

    Failed Assert("RelFileNumberIsValid(rlocator.relNumber)")

Note that, in non-assert builds, pg_prewarm() just failed with an error
in that case, so there was no ill effect in practice.

This commit fixes the issue by having pg_prewarm() raise an error early
if the specified object has no storage. This approach is similar to
the fix in commit 4623d7144 for pg_freespacemap.
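
A hedged sketch of the early check (illustrative; the error wording is
not the exact message):

    if (!RELKIND_HAS_STORAGE(rel->rd_rel->relkind))
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("relation \"%s\" does not have storage",
                        RelationGetRelationName(rel))));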

Back-patched to v17, where the issue was introduced.

Author: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/e082e6027610fd0a4091ae6d033aa117@oss.nttdata.com
Backpatch-through: 17
2025-05-29 17:50:32 +09:00
c3623703f3 Add AioUringCompletion in wait_event_names.txt
Oversight in c325a7633f, where the LWLock tranche AioUringCompletion
has been added.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aDT5sBOxJTdulXnE@paquier.xyz
2025-05-29 13:25:05 +09:00
a1de1b0833 doc PG 18 relnotes: split apart log_connections item
Also add details to asynchronous I/O item.

Reported-by: Melanie Plageman

Discussion: https://postgr.es/m/CAAKRu_YsVvyantS0X0Y_-vp_97=yGaoYJMXXyCEkR7pumAH3Jg@mail.gmail.com
2025-05-28 22:43:36 -04:00
35a428f30b pg_stat_statements: Fix parameter number gaps in normalized queries
pg_stat_statements anticipates that certain constant locations may be
recorded multiple times and attempts to avoid calculating a length for
these locations in fill_in_constant_lengths().

However, in generate_normalized_query(), where normalized query
strings are generated, these locations are not excluded from
consideration.  This could increment the parameter number counter for
every recorded occurrence at such a location, leading to an incorrect
normalization in certain cases, with gaps in the numbers reported.

For example, take this query:

    SELECT WHERE '1' IN ('2'::int, '3'::int::text)

Before this commit, it would be normalized like this, with gaps in the
parameter numbers:

    SELECT WHERE $1 IN ($3::int, $4::int::text)

However, the correct, less confusing form should be:

    SELECT WHERE $1 IN ($2::int, $3::int::text)

This commit fixes the computation of the parameter numbers to track the
number of constants replaced with an $n by a separate counter instead of
the iterator used to loop through the list of locations.
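
A hedged sketch of the counter change (simplified; the skip test is a
placeholder for the duplicate-location handling):

    int     param_no = 1;           /* numbering for $n placeholders */

    for (i = 0; i < jstate->clocations_count; i++)
    {
        if (location_is_duplicate)  /* no length recorded for this one */
            continue;
        appendStringInfo(&buf, "$%d", param_no++);
    }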

The underlying query IDs are not changed, neither are the normalized
strings for existing PGSS hash entries.  New entries with fresh
normalized queries would automatically get reshaped based on the new
parameter numbering.

Issue discovered while discussing a separate problem for HEAD, but this
affects all the stable branches.

Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0tzxvWXsacGyxrixdhy3tTTDfJQqxyFBRFh31nNHBQ5qA@mail.gmail.com
Backpatch-through: 13
2025-05-29 11:26:03 +09:00
089f27cf8a doc: clarify log_connections new "setup_durations" output 2025-05-28 21:42:34 -04:00
bf6034d00d doc PG 18 relnotes: move ANALYZE item, split ANALYZE/EXPLAIN item
Reported-by: Yugo Nagata

Author: Yugo Nagata

Discussion: https://postgr.es/m/20250528232503.7db770f651c2c821c0e3c1df@sraoss.co.jp
2025-05-28 18:43:31 -04:00
e5d64fd654 Tighten parsing of datetime input.
ParseFraction only expects to deal with fields that contain a decimal
point and digit(s).  However it's possible in some edge cases for it
to be passed input that doesn't look like that.  In particular the
input could look like a valid floating-point number, such as ".123e6".
strtod() will happily eat that, possibly producing a result that is
not within the expected range 0..1, which can result in integer
overflow in the callers.  That doesn't have any security consequences,
but it's still not very desirable.  Fix by checking that the input
has the expected form.
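
A hedged sketch of the tightened check (illustrative; cp points at the
field text):

    if (cp[0] != '.' || !isdigit((unsigned char) cp[1]))
        return DTERR_BAD_FORMAT;
    errno = 0;
    frac = strtod(cp, &end);
    if (*end != '\0' || errno != 0 || frac < 0 || frac >= 1)
        return DTERR_BAD_FORMAT;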

Similarly, DecodeNumberField only expects to deal with fields that
contain a decimal point and digit(s), but it's sometimes abused to
parse strings that might not look like that.  This could result in
failure to reject bogus input, yielding silly results.  Again, fix
by rejecting input that doesn't look as expected.  That decision
also means that we can affirmatively answer the very old comment
questioning whether we couldn't save some duplicative code by
using ParseFractionalSecond here.

While these changes should only reject input that nobody would
consider valid, it still doesn't seem like a change to make in
stable branches.  Apply to HEAD only.

Reported-by: Evgeniy Gorbanev <gorbanev.es@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1328335.1748371099@sss.pgh.pa.us
2025-05-28 15:10:48 -04:00