mirror of https://github.com/postgres/postgres.git synced 2025-07-03 20:02:46 +03:00

45690 Commits

a604affade Add tab-completion for ALTER TABLE not-nulls
The command is: ALTER TABLE x ADD [CONSTRAINT y] NOT NULL z

This syntax was added in 18, but I got pushback for getting commit
dbf42b84ac into 18 (also tab-completion for new syntax) after the
feature freeze, so I'll put this in master only for now.

Author: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/d4f14c6b-086b-463c-b15f-01c7c9728eab@oss.nttdata.com
Discussion: https://postgr.es/m/202505111448.bwbfomrymq4b@alvherre.pgsql
2025-07-03 16:54:36 +02:00
c84698ceae Remove leftover dead code from commit_ts.h.
Commit 08aa89b326 removed the COMMIT_TS_SETTS WAL record,
leaving xl_commit_ts_set and SizeOfCommitTsSet unused. However,
it missed removing these definitions. This commit cleans up
the leftover code.

Since this is a cleanup rather than a bug fix, it is applied only to
the master branch.

Author: Andy Fan <zhihuifan1213@163.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/87ecuzmkqf.fsf@163.com
2025-07-03 23:39:45 +09:00
647cffd2f3 Prevent creation of duplicate not-null constraints for domains
This was previously harmless, but now that we create pg_constraint rows
for those, duplicates are not welcome anymore.
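
A hedged sketch (domain and constraint names invented) of the kind of
duplicate that is now rejected:

    CREATE DOMAIN positive_int AS integer
      CONSTRAINT nn1 NOT NULL
      CONSTRAINT nn2 NOT NULL;  -- duplicate not-null constraint, now an error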

Backpatch to 18.

Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CACJufxFSC0mcQ82bSk58sO-WJY4P-o4N6RD2M0D=DD_u_6EzdQ@mail.gmail.com
2025-07-03 11:46:12 +02:00
87251e1149 Fix bogus grammar for a CREATE CONSTRAINT TRIGGER error
If certain constraint characteristic clauses (NO INHERIT, NOT VALID, NOT
ENFORCED) are given to CREATE CONSTRAINT TRIGGER, the resulting error
message is
  ERROR:  TRIGGER constraints cannot be marked NO INHERIT
which is a bit silly, because these aren't "constraints of type
TRIGGER".  Hardcode a better error message to prevent it.  This is a
cosmetic fix for quite a fringe problem with no known complaints from
users, so no backpatch.
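
A hedged sketch (trigger, table, and function names invented) of a
command that now produces the better-worded error:

    CREATE CONSTRAINT TRIGGER trg AFTER INSERT ON t
      NOT VALID
      FOR EACH ROW EXECUTE FUNCTION trg_fn();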

While at it, silently accept ENFORCED if given.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CAAJ_b97hd-jMTS7AjgU6TDBCzDx_KyuKxG+K-DtYmOieg+giyQ@mail.gmail.com
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
2025-07-03 11:25:39 +02:00
8ec04c8577 Refactor subtype field of AlterDomainStmt
AlterDomainStmt.subtype used single characters for its command subtypes
(SET|DROP DEFAULT|NOT NULL and ADD|DROP|VALIDATE CONSTRAINT), which were
hardcoded in a couple of places in the code.  The code is improved by
using an enum instead, with the same character values as the original
code.

Note that the field was documented in parsenodes.h, but the
documentation forgot to mention 'V' (VALIDATE CONSTRAINT).

Author: Quan Zongliang <quanzongliang@yeah.net>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/41ff310b-16bd-44b9-a3ef-97e20f14b709@yeah.net
2025-07-03 16:34:28 +09:00
bc2f348e87 Support multi-line headers in COPY FROM command.
The COPY FROM command now accepts a non-negative integer for the HEADER option,
allowing multiple header lines to be skipped. This is useful when the input
contains multi-line headers that should be ignored during data import.
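
For example, a hedged sketch (table and file names invented) that skips
two header lines on import:

    COPY sales FROM '/tmp/sales.csv' WITH (FORMAT csv, HEADER 2);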

Author: Shinya Kato <shinya11.kato@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/CAOzEurRPxfzbxqeOPF_AGnAUOYf=Wk0we+1LQomPNUNtyZGBZw@mail.gmail.com
2025-07-03 15:27:26 +09:00
fd7d7b7191 Improve checks for GUC recovery_target_timeline
Currently check_recovery_target_timeline() converts any value that is
not "current", "latest", or a valid integer to 0.  So, for example, the
following configuration added to postgresql.conf followed by a startup:
recovery_target_timeline = 'bogus'
recovery_target_timeline = '9999999999'

...  results in the following error patterns:
FATAL:  22023: recovery target timeline 0 does not exist
FATAL:  22023: recovery target timeline 1410065407 does not exist

This is confusing, because the server does not reflect the intention of
the user, and just reports incorrect data unrelated to the GUC.

The origin of the problem is that we do not perform a range check on the
GUC value passed in for recovery_target_timeline.  This commit improves
the situation by using strtou64() and by providing stricter range
checks.  Some test cases are added for the cases of an incorrect, an
upper-bound and a lower-bound timeline value, checking the sanity of the
reports based on the contents of the server logs.

Author: David Steele <david@pgmasters.net>
Discussion: https://postgr.es/m/e5d472c7-e9be-4710-8dc4-ebe721b62cea@pgbackrest.org
2025-07-03 11:14:20 +09:00
0da29e4cb1 Enable use of Memoize for ANTI joins
Currently, we do not support Memoize for SEMI and ANTI joins because
nested loop SEMI/ANTI joins do not scan the inner relation to
completion, which prevents Memoize from marking the cache entry as
complete.  One might argue that we could mark the cache entry as
complete after fetching the first inner tuple, but that would not be
safe: if the first inner tuple and the current outer tuple do not
satisfy the join clauses, a second inner tuple matching the parameters
would find the cache entry already marked as complete.

However, if the inner side is provably unique, this issue doesn't
arise, since there would be no second matching tuple.  That said, this
doesn't help in the case of SEMI joins, because a SEMI join with a
provably unique inner side would already have been reduced to an inner
join by reduce_unique_semijoins.

Therefore, in this patch, we check whether the inner relation is
provably unique for ANTI joins and enable the use of Memoize in such
cases.
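
A hedged illustration (tables invented; assumes a unique index on t2.id,
making the inner side provably unique), where the plan for an anti join
like this may now include a Memoize node under a nested loop:

    EXPLAIN
    SELECT t1.*
    FROM t1
    WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.id = t1.ref);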

Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CAMbWs48FdLiMNrmJL-g6mDvoQVt0yNyJAqMkv4e2Pk-5GKCZLA@mail.gmail.com
2025-07-03 10:57:26 +09:00
7b2eb72b1b Add InjectionPointList() to retrieve list of injection points
This routine is useful for retrieving the list of injection points
currently attached in a system.  One use case would be a set-returning
function; another is just letting out-of-core code play with it.

This hides the internals of the shared memory array lookup holding the
information about the injection points (point name, library and function
name), allocating the result in a palloc'd List consumable by the
caller.

Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/Z_xYkA21KyLEHvWR@paquier.xyz
Discussion: https://postgr.es/m/aBG2rPwl3GE7m1-Q@paquier.xyz
2025-07-03 08:41:25 +09:00
fe05430ace Correctly copy the target host identification in PQcancelCreate.
PQcancelCreate failed to copy struct pg_conn_host's "type" field,
instead leaving it zero (a/k/a CHT_HOST_NAME).  This seemingly
has no great ill effects if it should have been CHT_UNIX_SOCKET
instead, but if it should have been CHT_HOST_ADDRESS then a
null-pointer dereference will occur when the cancelConn is used.

Bug: #18974
Reported-by: Maxim Boguk <maxim.boguk@gmail.com>
Author: Sergei Kornilov <sk@zsrv.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18974-575f02b2168b36b3@postgresql.org
Backpatch-through: 17
2025-07-02 15:48:02 -04:00
0c2b7174c3 Fix cross-version upgrade test breakage from commit fe07100e82.
In commit fe07100e82, I renamed a couple of functions in
test_dsm_registry to make it clear what they are testing.  However,
the buildfarm's cross-version upgrade tests run pg_upgrade with the
test modules installed, so this caused errors like:

    ERROR:  could not find function "get_val_in_shmem" in file ".../test_dsm_registry.so"

To fix, revert those renames.  I could probably get away with only
un-renaming the C symbols, but I figured I'd avoid introducing
function name mismatches.  Also, AFAICT the buildfarm's
cross-version upgrade tests do not run the test module tests
post-upgrade, else we'll need to properly version the extension.

Per buildfarm member crake.

Discussion: https://postgr.es/m/aGVuYUNW23tStUYs%40nathan
2025-07-02 13:26:33 -05:00
bb109382ef Make more use of RELATION_IS_OTHER_TEMP().
A few places were open-coding it instead of using this handy macro.

Author: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAEG8a3LjTGJcOcxQx-SUOGoxstG4XuCWLH0ATJKKt_aBTE5K8w%40mail.gmail.com
2025-07-02 12:32:19 -05:00
fe07100e82 Add GetNamedDSA() and GetNamedDSHash().
Presently, the dynamic shared memory (DSM) registry only provides
GetNamedDSMSegment(), which allocates a fixed-size segment.  To use
the DSM registry for more sophisticated things like dynamic shared
memory areas (DSAs) or a hash table backed by a DSA (dshash), users
need to create a DSM segment that stores various handles and LWLock
tranche IDs and to write fairly complicated initialization code.
Furthermore, there is likely little variation in this
initialization code between libraries.

This commit introduces functions that simplify allocating a DSA or
dshash within the DSM registry.  These functions are very similar
to GetNamedDSMSegment().  Notable differences include the lack of
an initialization callback parameter and the prohibition of calling
the functions more than once for a given entry in each backend
(which should be trivially avoidable in most circumstances).  While
at it, this commit bumps the maximum DSM registry entry name length
from 63 bytes to 127 bytes.

Also note that even though one could presumably detach/destroy the
DSAs and dshashes created in the registry, such use-cases are not
yet well-supported, if for no other reason than the associated DSM
registry entries cannot be removed.  Adding such support is left as
a future exercise.

The test_dsm_registry test module contains tests for the new
functions and also serves as a complete usage example.

Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/aEC8HGy2tRQjZg_8%40nathan
2025-07-02 11:50:52 -05:00
9ca30a0b04 Update obsolete row compare preprocessing comments.
Restore the nbtree preprocessing comments describing how we mark nbtree
row compare members required to the way they were prior to 2016 bugfix
commit a298a1e0.

Oversight in commit bd3f59fd, which made nbtree preprocessing revert to
the original 2006 rules, but neglected to revert these comments.

Backpatch-through: 18
2025-07-02 12:36:35 -04:00
7374b3a536 Allow width_bucket()'s "operand" input to be NaN.
The array-based variant of width_bucket() has always accepted NaN
inputs, treating them as equal but larger than any non-NaN,
as we do in ordinary comparisons.  But up to now, the four-argument
variants threw errors for a NaN operand.  This is inconsistent
and unnecessary, since we can perfectly well regard NaN as falling
after the last bucket.
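
For instance (a sketch; since NaN now sorts after the last bucket, the
result is count + 1):

    SELECT width_bucket('NaN'::float8, 0.0, 10.0, 5);  -- returns 6 instead of erroring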

We do still throw error for NaN or infinity histogram-bound inputs,
since there's no way to compute sensible bucket boundaries.

Arguably this is a bug fix, but given the lack of field complaints
I'm content to fix it in master.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/2822872.1750540911@sss.pgh.pa.us
2025-07-02 11:34:40 -04:00
c989affb52 Fix error message for ALTER CONSTRAINT ... NOT VALID
Trying to alter a constraint so that it becomes NOT VALID results in an
error that assumes the constraint is a foreign key.  This is potentially
wrong, so give a more generic error message.
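
A hedged sketch (table and constraint names invented) of a command that
reaches this error:

    ALTER TABLE t ALTER CONSTRAINT t_check NOT VALID;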

While at it, give CREATE CONSTRAINT TRIGGER a better error message as
well.

Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
2025-07-02 17:02:27 +02:00
bd3f59fdb7 Make row compares robust during nbtree array scans.
Recent nbtree bugfix commit 5f4d98d4 added a special case to the code
that sets up a page-level prefix of keys that are definitely satisfied
by every tuple on the page: whenever _bt_set_startikey reached a row
compare key, we'd refuse to apply the pstate.forcenonrequired behavior
in scans where that usually happens (scans with a higher-order array
key).  That hack made the scan avoid essentially the same infinite
cycling behavior that also affected nbtree scans with redundant keys
(keys that preprocessing could not eliminate) prior to commit f09816a0.
There are now serious doubts about this row compare workaround.

Testing has shown that a scan with a row compare key and an array key
could still read the same leaf page twice (without the scan's direction
changing), which isn't supposed to be possible following the SAOP
enhancements added by Postgres 17 commit 5bf748b8.  Also, we still
allowed a required row compare key to be used with forcenonrequired mode
when its header key happened to be beyond the pstate.ikey set by
_bt_set_startikey, which was complicated and brittle.

The underlying problem was that row compares had inconsistent rules
around how scans start (which keys can be used for initial positioning
purposes) and how scans end (which keys can set continuescan=false).
Quals with redundant keys that could not be eliminated by preprocessing
also had that same quality to them prior to today's bugfix f09816a0.  It
now seems prudent to bring row compare keys in line with the new charter
for required keys, by making the start and end rules symmetric.

This commit fixes two points of disagreement between _bt_first and
_bt_check_rowcompare.  Firstly, _bt_check_rowcompare was capable of
ending the scan at the point where it needed to compare an ISNULL-marked
row compare member that came immediately after a required row compare
member.  _bt_first now has symmetric handling for NULL row compares.
Secondly, _bt_first had its own ideas about which keys were safe to use
for initial positioning purposes.  It could use fewer or more keys than
_bt_check_rowcompare.  _bt_first now uses the same requiredness markings
as _bt_check_rowcompare for this.

Now that _bt_first and _bt_check_rowcompare agree on how to start and
end scans, we can get rid of the forcenonrequired special case, without
any risk of infinite cycling.  This approach also makes row compare keys
behave more like regular scalar keys, particularly within _bt_first.

Fixing these inconsistencies necessitates dealing with a related issue
with the way that row compares were marked required by preprocessing: we
didn't mark any lower-order row members required following 2016 bugfix
commit a298a1e0.  That approach was overly broad.  The bug in question was
actually an oversight in how _bt_check_rowcompare dealt with tuple NULL
values that failed to satisfy a scan key marked required in the opposite
scan direction (it was a bug in 2011 commits 6980f817 and 882368e8, not
a bug in 2006 commit 3a0a16cb).  Go back to marking row compare members
as required using the original 2006 rules, and fix the 2016 bug in a
more principled way: by limiting use of the "set continuescan=false with
a key required in the opposite scan direction upon encountering a NULL
tuple value" optimization to the first/most significant row member key.
While it isn't safe to use an implied IS NOT NULL qualifier to end the
scan when it comes from a required lower-order row compare member key,
it _is_ generally safe for such a required member key to end the scan --
provided the key is marked required in the _current_ scan direction.

This fixes what was arguably an oversight in either commit 5f4d98d4 or
commit 8a510275.  It is a direct follow-up to today's commit f09816a0.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=pcijHL_mA0_TJ5LiTB28QpQ0cGtT-ccFV=KzuunNDDQ@mail.gmail.com
Backpatch-through: 18
2025-07-02 09:48:15 -04:00
f09816a0a7 Make handling of redundant nbtree keys more robust.
nbtree preprocessing's handling of redundant (and contradictory) keys
created problems for scans with = arrays.  It was just about possible
for a scan with an = array key and one or more redundant keys (keys that
preprocessing could not eliminate due an incomplete opfamily and a
cross-type key) to get stuck.  Testing has shown that infinite cycling
where the scan never manages to make forward progress was possible.
This could happen when the scan's arrays were reset in _bt_readpage's
forcenonrequired=true path (added by bugfix commit 5f4d98d4) when the
arrays weren't at least advanced up to the same point that they were in
at the start of the _bt_readpage call.  Earlier redundant keys prevented
the finaltup call to _bt_advance_array_keys from reaching lower-order
keys that needed to be used to sufficiently advance the scan's arrays.

To fix, make preprocessing leave the scan's keys in a state that is as
close as possible to how it'll usually leave them (in the common case
where there's no redundant keys that preprocessing failed to eliminate).
Now nbtree preprocessing _reliably_ leaves behind at most one required
>/>= key per index column, and at most one required </<= key per index
column.  Columns that have one or more = keys that are eligible to be
marked required (based on the traditional rules) prioritize the = keys
over redundant inequality keys; they'll _reliably_ be left with only one
of the = keys as the index column's only required key.

Keys that are not marked required (whether due to the new preprocessing
step running or for some other reason) are relocated to the end of the
so->keyData[] array as needed.  That way they'll always be evaluated
after the scan's required keys, and so cannot prevent code in places
like _bt_advance_array_keys and _bt_first from reaching a required key.

Also teach _bt_first to decide which initial positioning keys to use
based on the same requiredness markings that have long been used by
_bt_checkkeys/_bt_advance_array_keys.  This is a necessary condition for
reliably avoiding infinite cycling.  _bt_advance_array_keys expects to
be able to reason about what'll happen in the next _bt_first call should
it start another primitive index scan, by evaluating inequality keys
that were marked required in the opposite-to-scan direction only.
Now everybody (_bt_first, _bt_checkkeys, and _bt_advance_array_keys)
will always agree on which exact key will be used on each index column
to start and/or end the scan (except when row compare keys are involved,
which have similar problems not addressed by this commit).

An upcoming commit will finish off the work started by this commit by
harmonizing how _bt_first, _bt_checkkeys, and _bt_advance_array_keys
apply row compare keys to start and end scans.

This fixes what was arguably an oversight in either commit 5f4d98d4 or
commit 8a510275.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=ds4M+3NXMgwxYxqU8MULaLf696_v5g=9WNmWL2=Uo2A@mail.gmail.com
Backpatch-through: 18
2025-07-02 09:40:49 -04:00
f039c22441 meson: Increase minimum version to 0.57.2
The previous minimum was chosen to maintain support for Python 3.5, but
we now require Python 3.6 anyway (commit 45363fca63), so that reason is
obsolete.  A small raise to Meson 0.57 allows getting rid of a fair
number of version conditionals and silences some future-deprecated
warnings.

With the version bump, the following deprecation warnings appeared and
are fixed:

WARNING: Project targets '>=0.57' but uses feature deprecated since '0.55.0': ExternalProgram.path. use ExternalProgram.full_path() instead
WARNING: Project targets '>=0.57' but uses feature deprecated since '0.56.0': meson.build_root. use meson.project_build_root() or meson.global_build_root() instead.

It turns out that meson 0.57.0 and 0.57.1 are buggy for our use, so
the minimum is actually set to 0.57.2.  This is specific to this
version series; in the future we won't necessarily need to be this
precise.

Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/42e13eb0-862a-441e-8d84-4f0fd5f6def0%40eisentraut.org
2025-07-02 11:14:53 +02:00
de5aa15209 Reformat some node comments
Use per-field comments for IndexInfo, instead of one big header
comment listing all the fields.  This makes the relevant comments
easier to find, and it will also make it less likely that comments are
not updated when fields are added or removed, as has happened in the
past.

Author: Japin Li <japinli@hotmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-07-02 09:50:51 +02:00
3811ca3600 Fix missing FSM vacuum opportunities on tables without indexes.
Commit c120550edb optimized the vacuuming of relations without
indexes (a.k.a. one-pass strategy) by directly marking dead item IDs
as LP_UNUSED. However, the periodic FSM vacuum was still checking if
dead item IDs had been marked as LP_DEAD when attempting to vacuum the
FSM every VACUUM_FSM_EVERY_PAGES blocks. This condition was never met
due to the optimization, resulting in missed FSM vacuum
opportunities.

This commit modifies the periodic FSM vacuum condition to use the
number of tuples deleted during HOT pruning. This count includes items
marked as either LP_UNUSED or LP_REDIRECT, both of which are expected
to result in new free space to report.

Back-patch to v17 where the vacuum optimization for tables with no
indexes was introduced.

Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CAD21AoBL8m6B9GSzQfYxVaEgvD7-Kr3AJaS-hJPHC+avm-29zw@mail.gmail.com
Backpatch-through: 17
2025-07-01 23:25:20 -07:00
9adb58a3cc Remove implicit cast from 'void *'
Commit e2809e3a10 added code to a header which assigns a pointer
to void to a pointer to unsigned char. This causes build errors for
extensions written in C++. Fix by adding an explicit cast.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CANWCAZaCq9AHBuhs%3DMx7Gg_0Af9oRU7iAqr0itJCtfmsWwVmnQ%40mail.gmail.com
Backpatch-through: 18
2025-07-02 11:51:10 +07:00
3369a3b49b Fix bug in archive streamer with LZ4 decompression
When decompressing some input data, the calculation for the initial
starting point and the initial size were incorrect, potentially leading
to failures when decompressing contents with LZ4.  These initialization
points are fixed in this commit, bringing the logic closer to what
exists for gzip and zstd.

The contents of the compressed data are fine (for example, backups taken
with LZ4 can still be decompressed with a "lz4" command); only the
decompression code path reading the input data was impacted by this issue.

This code path impacts pg_basebackup and pg_verifybackup, which can use
the LZ4 decompression routines with an archive streamer, or any tools
that try to use the archive streamers in src/fe_utils/.

The issue is easier to reproduce with files that have a low compression
rate, like ones filled with random data, at a size of at least 512kB,
but it could happen with anything as long as it is stored in a data
folder.  Some tests are added based on this idea, with a file filled
with random bytes grabbed from the backend, written at the root of the
data folder.  This proves good enough to reproduce the original
problem.

Author: Mikhail Gribkov <youzhick@gmail.com>
Discussion: https://postgr.es/m/CAMEv5_uQS1Hg6KCaEP2JkrTBbZ-nXQhxomWrhYQvbdzR-zy-wA@mail.gmail.com
Backpatch-through: 15
2025-07-02 13:48:36 +09:00
b45242fd30 Move code for the bytea data type from varlena.c to new bytea.c
This commit moves all the routines related to the bytea data type into
its own new file, called bytea.c, clearing some of the bloat in
varlena.c.  This includes the routines for:
- Input, output, receive and send
- Comparison
- Casts to integer types
- bytea-specific functions

The internals of the routines moved here are unchanged, with one
exception in bytea_string_agg_transfn(): the call to
makeStringAggState() is replaced by that routine's internals, as
makeStringAggState() itself remains in varlena.c.  This simplifies the
move to the new file by not having to expose makeStringAggState().

Author: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CAJ7c6TMPVPJ5DL447zDz5ydctB8OmuviURtSwd=PHCRFEPDEAQ@mail.gmail.com
2025-07-02 09:52:21 +09:00
bee23ea4dd Show sizes of FETCH queries as constants in pg_stat_statements
Prior to this patch, every FETCH call would generate a unique queryId
with a different size specified.  Depending on the workloads, this could
lead to a significant bloat in pg_stat_statements, as repeatedly calling
a specific cursor would result in a new queryId each time.  For example,
FETCH 1 c1; and FETCH 2 c1; would produce different queryIds.

This patch improves the situation by normalizing the fetch size, so that
semantically similar statements generate the same queryId.  As a result,
statements like the below, which differ syntactically but have the same
effect, will now share a single queryId:
FETCH FROM c1
FETCH NEXT c1
FETCH 1 c1
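
A hedged way to observe the effect (assumes the pg_stat_statements
extension is installed and the statements above were run against an
open cursor c1):

    SELECT queryid, query, calls
    FROM pg_stat_statements
    WHERE query LIKE 'FETCH%';
    -- the three variants now collapse into a single normalized entry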

In order to normalize based on the keyword used in FETCH, FetchStmt is
tweaked with a new field, FetchDirectionKeywords.  This matters
for "howMany", which could be set to a negative value depending on the
direction, and we want to normalize the queries with enough information
about the direction keywords provided, including RELATIVE, ABSOLUTE or
all the ALL variants.

Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0tA6LbHCg2qSS+KuM850BZC_+ZgHV7Ug6BXw22TNyF+MA@mail.gmail.com
2025-07-02 08:39:25 +09:00
184595836b Update comment for IndexInfo.ii_NullsNotDistinct
Commit 7a7b3e11e6 added the ii_NullsNotDistinct field, but the
comment was not updated.

Author: Japin Li <japinli@hotmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-07-01 23:12:24 +02:00
32bcf568cb Make more use of binaryheap_empty() and binaryheap_size().
A few places were accessing bh_size directly instead of via these
handy macros.

Author: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/CAJ7c6TPQMVL%2B028T4zuw9ZqL5Du9JavOLhBQLkJeK0RznYx_6w%40mail.gmail.com
2025-07-01 14:19:07 -05:00
7a7b3e11e6 Update comment for IndexInfo.ii_WithoutOverlaps
Commit fc0438b4e8 added the ii_WithoutOverlaps field, but the comment
was not updated.

Author: Japin Li <japinli@hotmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-07-01 20:37:24 +02:00
9e5fee8228 Fix outdated comment for IndexInfo
Commit 7841623571 removed the ii_OpclassOptions field, but the
comment was not updated.

Author: Japin Li <japinli@hotmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-07-01 20:12:36 +02:00
fff0d1edf5 Improve code comment
The previous wording was potentially confusing about the impact of the
OVERRIDING clause on generated columns.  Reword slightly to avoid
that.

Reported-by: jian he <jian.universality@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CACJufxFMBe0nPXOQZMLTH4Ry5Gyj4m%2B2Z05mRi9KB4hk8rGt9w%40mail.gmail.com
2025-07-01 18:42:07 +02:00
1fd772d192 Make sure IOV_MAX is defined.
We stopped defining IOV_MAX on non-Windows systems in 75357ab94, on
the assumption that every non-Windows system defines it in <limits.h>
as required by X/Open.  GNU Hurd, however, doesn't follow that
standard either.  Put back the old logic to assume 16 if it's
not defined.

Author: Michael Banck <mbanck@gmx.net>
Co-authored-by: Christoph Berg <myon@debian.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6862e8d1.050a0220.194b8d.76fa@mx.google.com
Discussion: https://postgr.es/m/6846e0c3.df0a0220.39ef9b.c60e@mx.google.com
Backpatch-through: 16
2025-07-01 12:40:35 -04:00
29213636e6 Make safeguard against incorrect flags for fsync more portable.
The existing code assumed that O_RDONLY is defined as 0, but this is
not required by POSIX and is not true on GNU Hurd.  We can avoid
the assumption by relying on O_ACCMODE to mask the fcntl() result.
(Hopefully, all supported platforms define that.)

Author: Michael Banck <mbanck@gmx.net>
Co-authored-by: Samuel Thibault
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6862e8d1.050a0220.194b8d.76fa@mx.google.com
Discussion: https://postgr.es/m/68480868.5d0a0220.1e214d.68a6@mx.google.com
Backpatch-through: 13
2025-07-01 12:08:20 -04:00
8af0d0ab01 Remove provider field from pg_locale_t.
The behavior of pg_locale_t is specified by methods, so a separate
provider field is no longer necessary.

Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/2830211e1b6e6a2e26d845780b03e125281ea17b.camel%40j-davis.com
2025-07-01 07:50:46 -07:00
5a38104b36 Control ctype behavior internally with a method table.
Previously, pattern matching and case mapping behavior branched based
on the provider. Refactor to use a method table, which is less
error-prone.

This is also a step toward multiple provider versions, which we may
want to support in the future.

Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/2830211e1b6e6a2e26d845780b03e125281ea17b.camel%40j-davis.com
2025-07-01 07:44:47 -07:00
d81dcc8d62 Use pg_ascii_tolower()/pg_ascii_toupper() where appropriate.
Avoids unnecessary dependence on setlocale(). No behavior change.

This commit reverts e1458f2f1b, which reverted some changes
unintentionally committed before the branch for 19.

Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/a8666c391dfcabe79868d95f7160eac533ace718.camel@j-davis.com
Discussion: https://postgr.es/m/7efaaa645aa5df3771bb47b9c35df27e08f3520e.camel@j-davis.com
2025-07-01 07:24:23 -07:00
9e345415bc Fix indentation in pg_numa code
Broken by commits 7fe2f67c7c, 81f287dc92 and bf1119d74a. Backpatch
to 18, same as the offending commits.

Backpatch-through: 18
2025-07-01 15:23:07 +02:00
bf1119d74a Add CHECK_FOR_INTERRUPTS into pg_numa_query_pages
Querying the NUMA status can be quite time consuming, especially with
large shared buffers. 8cc139bec3 called numa_move_pages() once, for
all buffers, and we had to wait for the syscall to complete.

But with the chunking, introduced by 7fe2f67c7c to work around a kernel
bug, we can do CHECK_FOR_INTERRUPTS() after each chunk, allowing users
to abort the execution.

Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Backpatch-through: 18
2025-07-01 12:58:35 +02:00
81f287dc92 Silence valgrind about pg_numa_touch_mem_if_required
When querying NUMA status of pages in shared memory, we need to touch
the memory first to get valid results. This may trigger valgrind
reports, because some of the memory (e.g. unpinned buffers) may be
marked as noaccess.

Solved by adding a valgrind suppression. An alternative would be to
adjust the access/noaccess status before touching the memory, but that
seems far too invasive. It would require all those places to have
detailed knowledge of what the shared memory stores.

The pg_numa_touch_mem_if_required() macro is replaced with a function.
Macros are invisible to suppressions, so it'd have to suppress reports
for the caller - e.g. pg_get_shmem_allocations_numa(). So we'd suppress
reports for the whole function, and that seems too heavy-handed. It might
easily hide other valid issues.

Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Backpatch-through: 18
2025-07-01 12:32:23 +02:00
953050236a amcheck: Improve confusing message
The way it was worded, the %u placeholder could be read as the table
OID.  Rearrange slightly to avoid the possible confusion.

Reported-by: jian he <jian.universality@gmail.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CACJufxFx-25XQV%2Br23oku7ZnL958P30hyb9cFeYPv6wv7yzCCw%40mail.gmail.com
2025-07-01 12:24:17 +02:00
7fe2f67c7c Limit the size of numa_move_pages requests
There's a kernel bug in do_pages_stat(), affecting systems combining
64-bit kernel and 32-bit user space. The function splits the request
into chunks of 16 pointers, but forgets the pointers are 32-bit when
advancing to the next chunk. Some of the pointers get skipped, and
memory after the array is interpreted as pointers. The result is that
the produced status of memory pages is mostly bogus.

Systems combining 64-bit and 32-bit environments like this might seem
rare, but that's not the case - all 32-bit Debian packages are built in
a 32-bit chroot on a system with a 64-bit kernel.

This is a long-standing kernel bug (since 2010), affecting pretty much
all kernels, so it'll take time until all systems get a fixed kernel.
Luckily, we can work around the issue by chunking the requests the same
way do_pages_stat() does, at least on affected systems. We don't know
what kernel a 32-bit build will run on, so all 32-bit builds use chunks
of 16 elements (the largest chunk before hitting the issue).

64-bit builds are not affected by this issue, and so could work without
the chunking. But chunking has other advantages, so we apply chunking
even for 64-bit builds, with chunks of 1024 elements.

Reported-by: Christoph Berg <myon@debian.org>
Author: Christoph Berg <myon@debian.org>
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Context: https://marc.info/?l=linux-mm&m=175077821909222&w=2
Backpatch-through: 18
2025-07-01 12:02:31 +02:00
b5cd0ecd4d Fix typo in pg_publication.h.
Author: shveta malik <shveta.malik@gmail.com>
Discussion: https://postgr.es/m/CAJpy0uAyFN9o7vU_ZkZFv5-6ysXDNKNx_fC0gwLLKg=8==E3ow@mail.gmail.com
2025-07-01 15:17:03 +05:30
8fd9bb1d96 Enable MSVC conforming preprocessor
Switch MSVC to use the conforming preprocessor, using the
/Zc:preprocessor option.

This allows us to drop the alternative implementation of
VA_ARGS_NARGS() for the previous "traditional" preprocessor.

This also prepares the way for enabling C11 mode in the future, which
enables the conforming preprocessor by default.

This now requires Visual Studio 2019.  The installation documentation
is adjusted accordingly.

Discussion: https://www.postgresql.org/message-id/flat/01a69441-af54-4822-891b-ca28e05b215a%40eisentraut.org
2025-07-01 09:41:40 +02:00
c67989789c Fix typos in comments
Commit 19d8e2308b added enum values with the prefix TU_, but a few
comments still referred to TUUI_, which was used in development
versions of the patches committed as 19d8e2308b.

Author: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/20250701110216.8ac8a9e4c6f607f1d954f44a@sraoss.co.jp
Backpatch-through: 16
2025-07-01 13:13:48 +09:00
a3df0d43d9 Fix typo in system_views.sql's definition of pg_stat_activity
backend_xmin used a lower-case 's' instead of the upper-case 'S' used
by the other attributes.  This is harmless, but let's be
consistent.

Issue introduced in dd1a3bccca.

Author: Daisuke Higuchi <higuchi.daisuke11@gmail.com>
Discussion: https://postgr.es/m/CAEVT6c8M39cqWje-df39wWr0KWcDgGKd5fMvQo84zvCXKoEL9Q@mail.gmail.com
2025-07-01 09:41:42 +09:00
2e94721747 Improve error handling of libxml2 calls in xml.c
This commit fixes some defects in the backend's xml.c, found upon
inspection of the internals of libxml2:
- xmlEncodeSpecialChars() can fail on malloc(), returning NULL back to
the caller.  xmltext() assumed that this could never happen.  Like other
code paths, a TRY/CATCH block is added there, covering also the fact
that cstring_to_text_with_len() could fail a memory allocation, in
which case the backend would fail to free the buffer allocated by
xmlEncodeSpecialChars().
- Some libxml2 routines called in xmlelement() can return NULL, like
xmlAddChildList() or xmlTextWriterStartElement().  Dedicated errors are
added for them.
- xml_xmlnodetoxmltype() missed that xmlXPathCastNodeToString() can fail
on an allocation failure.  In this case, the call can just be moved to
the existing TRY/CATCH block.

All these code paths would cause the server to crash.  As this is
unlikely to be a problem in practice, no backpatch is done.  Jim and I
have caught these defects; not sure who has scored the most.  The contrib
module xml2/ has similar defects, which will be addressed in a separate
change.

Reported-by: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Discussion: https://postgr.es/m/aEEingzOta_S_Nu7@paquier.xyz
2025-07-01 08:57:05 +09:00
0836683a89 Improve error report for PL/pgSQL reserved word used as a field name.
The current code in resolve_column_ref (dating to commits 01f7d2990
and fe24d7816) believes that not finding a RECFIELD datum is a
can't-happen case, in consequence of which I didn't spend a whole lot
of time considering what to do if it did happen.  But it turns out
that it *can* happen if the would-be field name is a fully-reserved
PL/pgSQL keyword.  Change the error message to describe that
situation, and add a test case demonstrating it.

This might need further refinement if anyone can find other ways to
trigger a failure here; but without an example it's not clear what
other error to throw.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://postgr.es/m/2185258.1745617445@sss.pgh.pa.us
2025-06-30 17:06:39 -04:00
999f172ded De-reserve keywords EXECUTE and STRICT in PL/pgSQL.
On close inspection, there does not seem to be a strong reason
why these should be fully-reserved keywords.  I guess they just
escaped consideration in previous attempts to minimize PL/pgSQL's
list of reserved words.
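
A hedged sketch of what this permits (previously STRICT could not be
used as an ordinary identifier in PL/pgSQL):

    DO $$
    DECLARE
      strict int := 42;  -- formerly rejected as a reserved word
    BEGIN
      RAISE NOTICE 'strict = %', strict;
    END $$;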

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://postgr.es/m/2185258.1745617445@sss.pgh.pa.us
2025-06-30 16:59:36 -04:00
bd09f024a1 Add new OID alias type regdatabase.
This provides a convenient way to look up a database's OID.  For
example, the query

    SELECT * FROM pg_shdepend
    WHERE dbid = (SELECT oid FROM pg_database
                  WHERE datname = current_database());

can now be simplified to

    SELECT * FROM pg_shdepend
    WHERE dbid = current_database()::regdatabase;

Like the regrole type, regdatabase has cluster-wide scope, so we
disallow regdatabase constants from appearing in stored
expressions.

Bumps catversion.

Author: Ian Lawrence Barwick <barwick@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBpjJhyHpM2LYcG0%40nathan
2025-06-30 15:38:54 -05:00
f20a347e1a aio: Fix reference to outdated name
Reported-by: Antonin Houska <ah@cybertec.at>
Author: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/5250.1751266701@localhost
Backpatch-through: 18, where da7226993f introduced this
2025-06-30 10:22:02 -04:00
c3e28e9fd9 Avoid uninitialized value error in TAP tests' Cluster->psql
If the method is called in scalar context and we didn't pass in a stderr
handle, one won't be created. However, some error paths assume that it
exists, so in this case create a dummy stderr to avoid the resulting
perl error.

Per gripe from Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru> and
adapted from his patch.

Discussion: https://postgr.es/m/378eac5de4b8ecb5be7bcdf2db9d2c4d@postgrespro.ru
2025-06-30 09:49:50 -04:00