mirror of https://github.com/postgres/postgres.git synced 2025-06-08 22:02:03 +03:00

57486 Commits

Author SHA1 Message Date
Tom Lane
f7a08b6e96 Avoid integer overflow while testing wal_skip_threshold condition.
smgrDoPendingSyncs had two distinct risks of integer overflow while
deciding which way to ensure durability of a newly-created relation.
First, it accumulated the total size of all forks in a variable of
type BlockNumber (uint32).  While we restrict an individual fork's
size to fit in that, I don't believe there's such a restriction on
all of them added together.  Second, it proceeded to multiply the
sum by BLCKSZ, which most certainly could overflow a uint32.

(The exact expression is total_blocks * BLCKSZ / 1024.  The
compiler might choose to optimize that to total_blocks * 8,
which is not at quite as much risk of overflow as a literal
reading would be, but it's still wrong.)

If an overflow did occur it could lead to a poor choice to
shove a very large relation into WAL instead of fsync'ing it.
This wouldn't be fatal, but it could be inefficient.

Change total_blocks to uint64 which should be plenty, and
rearrange the comparison calculation to be overflow-safe.

I noticed this while looking for ramifications of the proposed
change in MAX_KILOBYTES.  It's not entirely clear to me why
wal_skip_threshold is limited to MAX_KILOBYTES in the
first place, but in any case this code is unsafe regardless
of the range of wal_skip_threshold.

Oversight in c6b92041d which introduced wal_skip_threshold,
so back-patch to v13.

Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
Backpatch-through: 13
2025-01-30 15:36:07 -05:00
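
A minimal standalone sketch of the arithmetic hazard and the overflow-safe rearrangement described above; the variable names follow the commit message, BLCKSZ is assumed to be the default 8192, and this is an illustration rather than the committed code:

    #include <stdint.h>
    #include <stdio.h>

    #define BLCKSZ 8192                     /* assumed default block size */

    int
    main(void)
    {
        uint64_t    total_blocks = 600000000;   /* ~4.5 TB worth of 8 kB blocks */
        uint64_t    wal_skip_threshold = 2048;  /* GUC value, in kilobytes */

        /* Unsafe form: in 32-bit arithmetic, blocks * BLCKSZ wraps around. */
        uint32_t    unsafe_kb = (uint32_t) total_blocks * BLCKSZ / 1024;

        printf("unsafe size estimate: %u kB\n", (unsigned) unsafe_kb);

        /* Overflow-safe rearrangement: compare block counts, not byte counts. */
        if (total_blocks >= wal_skip_threshold * UINT64_C(1024) / BLCKSZ)
            printf("large relation: fsync it\n");
        else
            printf("small relation: WAL-log it at commit\n");
        return 0;
    }
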
Tom Lane
6655d931c6 Handle default NULL insertion a little better.
If a column is omitted in an INSERT, and there's no column default,
the code in preptlist.c generates a NULL Const to be inserted.
Furthermore, if the column is of a domain type, we wrap the Const
in CoerceToDomain, so as to throw a run-time error if the domain
has a NOT NULL constraint.  That's fine as far as it goes, but
there are two problems:

1. We're being sloppy about the type/typmod that the Const is
labeled with.  It really should have the domain's base type/typmod,
since it's the input to CoerceToDomain not the output.  This can
result in coerce_to_domain inserting a useless length-coercion
function (useless because it's being applied to a null).  The
coercion would typically get const-folded away later, but it'd
be better not to create it in the first place.

2. We're not applying expression preprocessing (specifically,
eval_const_expressions) to the resulting expression tree.
The planner's primary expression-preprocessing pass already happened,
so that means the length coercion step and CoerceToDomain node miss
preprocessing altogether.

This is at the least inefficient, since it means the length coercion
and CoerceToDomain will actually be executed for each inserted row,
though they could be const-folded away in most cases.  Worse, it
seems possible that missing preprocessing for the length coercion
could result in an invalid plan (for example, due to failing to
perform default-function-argument insertion).  I'm not aware of
any live bug of that sort with core datatypes, and it might be
unreachable for extension types as well because of restrictions of
CREATE CAST, but I'm not entirely convinced that it's unreachable.
Hence, it seems worth back-patching the fix (although I only went
back to v14, as the patch doesn't apply cleanly at all in v13).

There are several places in the rewriter that are building null
domain constants the same way as preptlist.c.  While those are
before the planner and hence don't have any reachable bug, they're
still applying a length coercion that will be const-folded away
later, uselessly wasting cycles.  Hence, make a utility routine
that all of these places can call to do it right.

Making this code more careful about the typmod assigned to the
generated NULL constant has visible but cosmetic effects on some
of the plans shown in contrib/postgres_fdw's regression tests.

Discussion: https://postgr.es/m/1865579.1738113656@sss.pgh.pa.us
Backpatch-through: 14
2025-01-29 15:31:55 -05:00
Tom Lane
998c4fc7c2 Avoid breaking SJIS encoding while de-backslashing Windows paths.
When running on Windows, canonicalize_path() converts '\' to '/'
to prevent confusing the Windows command processor.  It was
doing that in a non-encoding-aware fashion; but in SJIS there
are valid two-byte characters whose second byte matches '\'.
So encoding corruption ensues if such a character is used in
the path.

We can fairly easily fix this if we know which encoding is
in use, but a lot of our utilities don't have much of a clue
about that.  After some discussion we decided we'd settle for
fixing this only in psql, and assuming that its value of
client_encoding matches what the user is typing.

It seems hopeless to get the server to deal with the problematic
characters in database path names, so we'll just declare that
case to be unsupported.  That means nothing need be done in
the server, nor in utility programs whose only contact with
file path names is for database paths.  But psql frequently
deals with client-side file paths, so it'd be good if it
didn't mess those up.

Bug: #18735
Reported-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com>
Discussion: https://postgr.es/m/18735-4acdb3998bb9f2b1@postgresql.org
Backpatch-through: 13
2025-01-29 14:24:36 -05:00
Noah Misch
9311fcb864 At update of non-LP_NORMAL TID, fail instead of corrupting page header.
The right mix of DDL and VACUUM could corrupt a catalog page header such
that PageIsVerified() durably fails, requiring a restore from backup.
This affects only catalogs that both have a syscache and have DDL code
that uses syscache tuples to construct updates.  One of the test
permutations shows a variant not yet fixed.

This makes !TransactionIdIsValid(TM_FailureData.xmax) possible with
TM_Deleted.  I think core and PGXN are indifferent to that.

Per bug #17821 from Alexander Lakhin.  Back-patch to v13 (all supported
versions).  The test case is v17+, since it uses INJECTION_POINT.

Discussion: https://postgr.es/m/17821-dd8c334263399284@postgresql.org
2025-01-25 11:28:18 -08:00
Noah Misch
3d57f7cf30 Test ECPG decadd(), decdiv(), decmul(), and decsub() for risnull() input.
Since commit 757fb0e5a9a61ac8d3a67e334faeea6dc0084b3f, these
Informix-compat functions return 0 without changing the output
parameter.  Initialize the output parameter before the test call, making
that obvious.  Before this, the expected test output has been depending
on freed stack memory.  "gcc -ftrivial-auto-var-init=pattern" revealed
that.  Back-patch to v13 (all supported versions).

Discussion: https://postgr.es/m/20250106192748.cf.nmisch@google.com
2025-01-25 11:28:18 -08:00
Tom Lane
53771e44df Doc: recommend "psql -X" for restoring pg_dump scripts.
This practice avoids possible problems caused by non-default psql
options, such as disabling AUTOCOMMIT.

Author: Shinya Kato <Shinya11.Kato@oss.nttdata.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/96ff23a5d858ff72ca8e823a014d16fe@oss.nttdata.com
Backpatch-through: 13
2025-01-25 12:42:05 -05:00
Tomas Vondra
92598f4fa5 Use the correct sizeof() in BufFileLoadBuffer
The sizeof() call should reference buffer.data, because that's the
buffer we're reading data into, not the whole PGAlignedBuffer union.
This was introduced by 44cac93464, which replaced the simple buffer
with a PGAlignedBuffer field.

It's benign, because the buffer is the largest field of the union, so
the sizes are the same. But it's easy to trip over this in a patch, so
fix and backpatch. Commit 44cac93464 went into 12, but that's EOL.

Backpatch-through: 13
Discussion: https://postgr.es/m/928bdab1-6567-449f-98c4-339cd2203b87@vondra.me
2025-01-25 02:16:15 +01:00
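
A standalone illustration of the sizeof() pitfall: sizing a read by the whole union is harmless only while the data member happens to be its largest field. The union below is a stand-in, not the real PostgreSQL definition:

    #include <stdio.h>

    typedef union AlignedBuffer
    {
        char        data[8192];         /* the buffer the data is read into */
        double      force_align_d;      /* alignment-forcing members */
        long long   force_align_i64;
    } AlignedBuffer;

    int
    main(void)
    {
        AlignedBuffer buffer;

        /* Today these agree, because data[] is the largest member... */
        printf("sizeof(buffer) = %zu, sizeof(buffer.data) = %zu\n",
               sizeof(buffer), sizeof(buffer.data));

        /*
         * ...but a read sized with sizeof(buffer) instead of
         * sizeof(buffer.data) silently breaks if the union ever grows a
         * larger member, which is why the fix references buffer.data.
         */
        return 0;
    }
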
Peter Eisentraut
155d6162e4 meson: Fix sepgsql installation
The sepgsql.sql file should be installed under share/contrib/, not
share/extension/, since it is not an extension.  This makes it match
what make install does.

Discussion: https://www.postgresql.org/message-id/flat/651a5baf-5c45-4a5a-a202-0c8453a4ebf8@eisentraut.org
2025-01-24 10:55:24 +01:00
Tom Lane
7ab181a29b Don't ask for bug reports about pthread_is_threaded_np() != 0.
We thought that this condition was unreachable in ExitPostmaster,
but actually it's possible if you have both a misconfigured locale
setting and some other mistake that causes PostmasterMain to bail
out before reaching its own check of pthread_is_threaded_np().

Given the lack of other reports, let's not ask for bug reports if
this occurs; instead just give the same hint as in PostmasterMain.

Bug: #18783
Reported-by: anani191181515@gmail.com
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/18783-d1873b95a59b9103@postgresql.org
Discussion: https://postgr.es/m/206317.1737656533@sss.pgh.pa.us
Backpatch-through: 13
2025-01-23 14:23:29 -05:00
Tom Lane
8c57f54853 Repair incorrect handling of AfterTriggerSharedData.ats_modifiedcols.
This patch fixes two distinct errors that both ultimately trace
to commit 71d60e2aa, which added the ats_modifiedcols field.

The more severe error is that ats_modifiedcols wasn't accounted for
in afterTriggerAddEvent's scanning loop that looks for a pre-existing
duplicate AfterTriggerSharedData.  Thus, a new event could be
incorrectly matched to an AfterTriggerSharedData that has a different
value of ats_modifiedcols, resulting in the wrong tg_updatedcols
bitmap getting passed to the trigger whenever it finally gets fired.
We'd not noticed because (a) few triggers consult tg_updatedcols,
and (b) we had no tests exercising a case where such a trigger was
called as an AFTER trigger.  In the test case added by this commit,
contrib/lo's trigger fails to remove a large object when expected
because (without this fix) it thinks the LO OID column hasn't changed.

The other problem was introduced by commit ce5aaea8c, which copied the
modified-columns bitmap into trigger-related storage.  It made a copy
for every trigger event, whereas what we really want is to make a new
copy only when we make a new AfterTriggerSharedData entry.  (We could
imagine adding extra logic to reduce the number of bitmap copies still
more, but it doesn't look worthwhile at the moment.)  In a simple test
of an UPDATE of 10000000 rows with a single AFTER trigger, this thinko
roughly tripled the amount of memory consumed by the pending-triggers
data structures, from 160446744 to 480443440 bytes.

Fixing the first problem requires introducing a bms_equal() call into
afterTriggerAddEvent's scanning loop, which is slightly annoying from
a speed perspective.  However, getting rid of the excessive bms_copy()
calls from the second problem balances that out; overall speed of
trigger operations is the same or slightly better, in my tests.

Discussion: https://postgr.es/m/3496294.1737501591@sss.pgh.pa.us
Backpatch-through: 13
2025-01-22 11:58:20 -05:00
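
A fragment-level sketch of the two changes, assuming the PostgreSQL tree: the AfterTriggerSharedData field names are real, but the loop structure is simplified and allocate_new_shared_entry() is an illustrative stand-in, not an actual function:

    for (newshared = oldtail; newshared < newtail; newshared++)
    {
        if (newshared->ats_tgoid == tgoid &&
            newshared->ats_relid == relid &&
            newshared->ats_event == event &&
            bms_equal(newshared->ats_modifiedcols, modifiedcols))
            break;              /* reuse this AfterTriggerSharedData */
    }

    if (newshared >= newtail)
    {
        /* No pre-existing match: make a new shared entry, and copy the
         * bitmap only here, once per shared entry rather than once per
         * trigger event. */
        newshared = allocate_new_shared_entry();        /* illustrative */
        newshared->ats_modifiedcols = bms_copy(modifiedcols);
    }

The bms_equal() test is the check that was missing: without it, two events with different modified-column sets could be matched to the same shared entry.
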
Álvaro Herrera
ddab512ebb
Fix detach of a partition that has a toplevel FK to a partitioned table
In common cases, foreign keys are defined on the toplevel partitioned
table; but if instead one is defined on a partition and references a
partitioned table, and the referencing partition is detached, we would
examine the pg_constraint row on the partition being detached, and fail
to realize that the sub-constraints must be left alone.  This causes the
ALTER TABLE DETACH process to fail with

 ERROR:  could not find ON INSERT check triggers of foreign key constraint NNN

This is similar but not quite the same as what was fixed by
53af9491a043.  This bug doesn't affect branches earlier than 15, because
the detach procedure was different there, so we only backpatch down to
15.

Fix by skipping the modification of constraints that are children of
other constraints being detached.

Author: Amul Sul <sulamul@gmail.com>
Diagnosed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAAJ_b97GuPh6wQPbxQS-Zpy16Oh+0aMv-w64QcGrLhCOZZ6p+g@mail.gmail.com
2025-01-21 14:53:46 +01:00
Tom Lane
d62403c512 Update time zone data files to tzdata release 2025a.
DST law changes in Paraguay.
Historical corrections for the Philippines.

Backpatch-through: 13
2025-01-20 16:49:15 -05:00
Tom Lane
5054b7e107 Avoid using timezone Asia/Manila in regression tests.
The freshly-released 2025a version of tzdata has a refined estimate
for the longitude of Manila, changing their value for LMT in
pre-standardized-timezone days.  This changes the output of one of
our test cases.  Since we need to be able to run with system tzdata
files that may or may not contain this update, we'd better stop
making that specific test.

I switched it to use Asia/Singapore, which has a roughly similar UTC
offset.  That LMT value hasn't changed in tzdb since 2003, so we can
hope that it's well established.

I also noticed that this set of make_timestamptz tests only exercises
zones east of Greenwich, which seems rather sad, and was not the
original intent AFAICS.  (We've already changed these tests once
to stabilize their results across tzdata updates, cf 66b737cd9;
it looks like I failed to consider the UTC-offset-sign aspect then.)
To improve that, add a test with Pacific/Honolulu.  That LMT offset
is also quite old in tzdb, so we'll cross our fingers that it doesn't
get improved.

Reported-by: Christoph Berg <cb@df7cb.de>
Discussion: https://postgr.es/m/Z46inkznCxesvDEb@msg.df7cb.de
Backpatch-through: 13
2025-01-20 15:47:53 -05:00
Thomas Munro
b4b52c911a Fix latch event policy that hid socket events.
If a WaitEventSetWait() caller asks for multiple events, an already set
latch would previously prevent other events from being reported at the
same time.  Now, we'll also poll the kernel for other events that would
fit in the caller's output buffer with a zero wait time.  This policy
change doesn't affect callers that ask for only one event.

The main caller affected is the postmaster.  If its latch is set
extremely frequently by backends launching workers and workers exiting,
we don't want it to handle only those jobs and ignore incoming client
connections.

Back-patch to 16 where the postmaster began using the API.  The
fast-return policy changed here is older than that, but doesn't cause
any known problems in earlier releases.

Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/Z1n5UpAiGDmFcMmd%40nathan
2025-01-20 16:55:11 +13:00
Michael Paquier
2c2e1d4f42 Fix header check for continuation records where standbys could be stuck
XLogPageRead() checks immediately for an invalid WAL record header on a
standby, to be able to handle the case of continuation records that need
to be read across two different sources.  As written, the check was too
generic, applying to any target LSN.  Based on an analysis by Kyotaro
Horiguchi, what really matters is to make sure that the page header is
checked when attempting to read an LSN at the boundary of a segment, to
handle the case of a continuation record that spans multiple pages
across segment boundaries, since WAL receivers, once spawned, request
WAL from the beginning of a segment.  This fix has been
proposed by Kyotaro Horiguchi.

This could cause standbys to loop infinitely when dealing with a
continuation record during a timeline jump, in the case where the
contents of the record in the follow-up page are invalid.

Some regression tests are added to check such scenarios, able to
reproduce the original problem.  In the test, the contents of a
continuation record are overwritten with junk zeros on its follow-up
page, and replayed on standbys.  This is inspired by 039_end_of_wal.pl,
and is enough to show how standbys should react on promotion by not
being stuck.  Without the fix, the test would fail with a timeout.  The
test to reproduce the problem has been written by Alexander Kukushkin.

The original check has been introduced in 066871980183, for a similar
problem.

Author: Kyotaro Horiguchi, Alexander Kukushkin
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAFh8B=mozC+e1wGJq0H=0O65goZju+6ab5AU7DEWCSUA2OtwDg@mail.gmail.com
Backpatch-through: 13
2025-01-20 09:30:36 +09:00
Michael Paquier
4d72357c40 Revert recent changes related to handling of 2PC files at recovery
This commit reverts 8f67f994e8ea (down to v13) and c3de0f9eed38 (down to
v17), as these are proving to not be completely correct regarding two
aspects:
- In v17 and newer branches, c3de0f9eed38's check for epoch handling is
incorrect, and does not correctly handle frozen epochs.  A logic closer
to widen_snapshot_xid() should be used.  The 2PC code should try to
integrate more deeply with FullTransactionIds; 5a1dfde8334b is not enough.
- In v13 and newer branches, 8f67f994e8ea is a workaround for the real
issue, which is that we should not attempt CLOG lookups without reaching
consistency.  This exists since 728bd991c3c4, and this is reachable with
ProcessTwoPhaseBuffer() called by restoreTwoPhaseData() at the beginning
of recovery.

Per discussion with Noah Misch.

Discussion: https://postgr.es/m/20250116010051.f3.nmisch@google.com
Backpatch-through: 13
2025-01-17 13:27:43 +09:00
Tom Lane
94c02bd334 Fix setrefs.c's failure to do expression processing on prune steps.
We should run the expression subtrees of PartitionedRelPruneInfo
structs through fix_scan_expr.  Failure to do so means that
AlternativeSubPlans within those expressions won't be cleaned up
properly, resulting in "unrecognized node type" errors since v14.

It seems fairly likely that at least some of the other steps done
by fix_scan_expr are important here as well, resulting in as-yet-
undetected bugs.  Therefore, I've chosen to back-patch this to
all supported branches including v13, even though the known
symptom doesn't manifest in v13.

Per bug #18778 from Alexander Lakhin.

Discussion: https://postgr.es/m/18778-24cd399df6c806af@postgresql.org
2025-01-16 20:40:07 -05:00
Michael Paquier
9420f9bb61 Move routines to manipulate WAL into PostgreSQL::Test::Cluster
These facilities were originally in the recovery TAP test
039_end_of_wal.pl.  A follow-up bug fix with a TAP test doing similar
WAL manipulations requires them, and all these had better not be
duplicated due to their complexity.  The routine names are tweaked to
use "wal" more consistently, similarly to the existing "advance_wal".

In v14 and v13, the new routines are moved to PostgresNode.pm.
039_end_of_wal.pl is updated to use the refactored routines, without
changing its coverage.

Reviewed-by: Alexander Kukushkin
Discussion: https://postgr.es/m/CAFh8B=mozC+e1wGJq0H=0O65goZju+6ab5AU7DEWCSUA2OtwDg@mail.gmail.com
Backpatch-through: 13
2025-01-16 09:26:26 +09:00
Tom Lane
b935691b8b Avoid symbol collisions between pqsignal.c and legacy-pqsignal.c.
In the name of ABI stability (that is, to avoid a library major
version bump for libpq), libpq still exports a version of pqsignal()
that we no longer want to use ourselves.  However, since that has
the same link name as the function exported by src/port/pqsignal.c,
there is a link ordering dependency determining which version will
actually get used by code that uses libpq as well as libpgport.a.

It now emerges that the wrong version has been used by pgbench and
psql since commit 06843df4a rearranged their link commands.  This
can result in odd failures in pgbench with the -T switch, since its
SIGALRM handler will now not be marked SA_RESTART.  psql may have
some edge-case problems in \watch, too.

Since we don't want to depend on link ordering effects anymore,
let's fix this in the same spirit as b6c7cfac8: use macros to change
the actual link names of the competing functions.  We cannot change
legacy-pqsignal.c's exported name of course, so the victim has to be
src/port/pqsignal.c.

In master, rename its exported name to be pqsignal_fe in frontend or
pqsignal_be in backend.  (We could perhaps have gotten away with using
the same symbol in both cases, but since the FE and BE versions now
work a little differently, it seems advisable to use different names.)

In back branches, rename to pqsignal_fe in frontend but keep it as
pqsignal in backend.  The frontend change could affect third-party
code that is calling pqsignal from libpgport.a or libpgport_shlib.a,
but only if the code is compiled against port.h from a different minor
release than libpgport.  Since we don't support using libpgport as a
shared library, it seems unlikely that there will be such a problem.
I left the backend symbol unchanged to avoid an ABI break for
extensions.  This means that the link ordering hazard still exists
for any extension that links against libpq.  However, none of our own
extensions use both pqsignal() and libpq, and we're not making things
any worse for third-party extensions that do.

Report from Andy Fan, diagnosis by Fujii Masao, patch by me.
Back-patch to all supported branches, as 06843df4a was.

Discussion: https://postgr.es/m/87msfz5qv2.fsf@163.com
2025-01-14 18:50:24 -05:00
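
A hedged sketch of the link-name redirection technique; the real declarations live in src/include/port.h and differ in detail (for instance the signal-handler typedef), but the mechanism is the macro shown here:

    /* Give port.h's pqsignal a distinct link name so that it can never
     * collide with the legacy pqsignal exported by libpq. */
    typedef void (*pqsigfunc) (int signo);      /* simplified typedef */

    #ifdef FRONTEND
    #define pqsignal pqsignal_fe                /* frontend callers bind here */
    #else
    #define pqsignal pqsignal_be                /* master only; back branches
                                                 * keep plain pqsignal here */
    #endif

    extern pqsigfunc pqsignal(int signo, pqsigfunc func);
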
Fujii Masao
5c7c34db25 ecpg: Restore detection of unsupported COPY FROM STDIN.
The ecpg command includes code to warn about unsupported COPY FROM STDIN
statements in input files. However, since commit 3d009e45bd,
this functionality has been broken due to a bug introduced in that commit,
causing ecpg to fail to detect the statement.

This commit resolves the issue, restoring ecpg's ability to detect
COPY FROM STDIN and issue a warning as intended.

Back-patch to all supported versions.

Author: Ryo Kanbayashi
Reviewed-by: Hayato Kuroda, Tom Lane
Discussion: https://postgr.es/m/CANOn0Ez_t5uDCUEV8c1YORMisJiU5wu681eEVZzgKwOeiKhkqQ@mail.gmail.com
2025-01-15 01:25:05 +09:00
Heikki Linnakangas
91fc447c21 Fix catcache invalidation of a list entry that's being built
If a new catalog tuple is inserted that belongs to a catcache list
entry, and cache invalidation happens while the list entry is being
built, the list entry might miss the newly inserted tuple.

To fix, change the way we detect concurrent invalidations while a
catcache entry is being built. Keep a stack of entries that are being
built, and apply cache invalidation to those entries in addition to
the real catcache entries. This is similar to the in-progress list in
relcache.c.

Back-patch to all supported versions.

Reviewed-by: Noah Misch
Discussion: https://www.postgresql.org/message-id/2234dc98-06fe-42ed-b5db-ac17384dc880@iki.fi
2025-01-14 14:41:01 +02:00
Michael Paquier
bfda7d8dd6 Fix potential integer overflow in bringetbitmap()
This function expects an "int64" as result and stores the number of
pages to add to the index scan bitmap as an "int", multiplying its final
result by 10.  For a relation large enough, this can theoretically
overflow if counting more than (INT32_MAX / 10) pages, knowing that the
number of pages is upper-bounded by MaxBlockNumber.

To avoid the overflow, this commit redefines "totalpages", used to
calculate the result, to be an "int64" rather than an "int".

Reported-by: Evgeniy Gorbanyov
Author: James Hunter
Discussion: https://www.postgresql.org/message-id/07704817-6fa0-460c-b1cf-cd18f7647041@basealt.ru
Backpatch-through: 13
2025-01-14 15:13:15 +09:00
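
A hedged fragment showing the nature of the fix inside bringetbitmap(); the variable names follow the commit message and the surrounding loop is elided:

    /* Sketch only (assumes the PostgreSQL tree); not the committed diff. */
    int64       totalpages = 0;     /* previously "int", which could overflow:
                                     * up to MaxBlockNumber pages, times 10 below */

    /* ... the per-range loop adds to totalpages for each page the bitmap
     *     scan will need to visit ... */

    return totalpages * 10;         /* now evaluated in 64-bit arithmetic */
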
Daniel Gustafsson
116036d811 Fix HBA option count
Commit 27a1f8d108 missed updating the max HBA option count to
account for the new option added.  Fix by bumping the counter
and adjust the relevant comment to match.  Backpatch down to
all supported branches like the erroneous commit.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/286764.1736697356@sss.pgh.pa.us
Backpatch-through: v13
2025-01-12 23:44:39 +01:00
Dean Rasheed
77763f3bef Fix XMLTABLE() deparsing to quote namespace names if necessary.
When deparsing an XMLTABLE() expression, XML namespace names were not
quoted. However, since they are parsed as ColLabel tokens, some names
require double quotes to ensure that they are properly interpreted.
Fix by using quote_identifier() in the deparsing code.

Back-patch to all supported versions.

Dean Rasheed, reviewed by Tom Lane.

Discussion: https://postgr.es/m/CAEZATCXTpAS%3DncfLNTZ7YS6O5puHeLg_SUYAit%2Bcs7wsrd9Msg%40mail.gmail.com
2025-01-12 12:58:14 +00:00
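
A hedged sketch of the kind of change involved in the ruleutils.c deparsing code; the exact context and format string are illustrative, but quote_identifier() is the standard helper named in the commit message:

    /* Before: the XML namespace name was emitted verbatim, which breaks
     * round-tripping for names that need quoting. */
    appendStringInfo(buf, " AS %s", ns_name);

    /* After: quote when necessary, as for any other identifier. */
    appendStringInfo(buf, " AS %s", quote_identifier(ns_name));
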
Tom Lane
33a4e656dc Repair memory leaks in plpython.
PLy_spi_execute_plan (PLyPlan.execute) and PLy_cursor_plan
(plpy.cursor) use PLy_output_convert to convert Python values
into Datums that can be passed to the query-to-execute.  But they
failed to pay much attention to its warning that it can leave "cruft
generated along the way" behind.  Repeated use of these methods can
result in a substantial memory leak for the duration of the calling
plpython function.

To fix, make a temporary memory context to invoke PLy_output_convert
in.  This also lets us get rid of the rather fragile code that was
here for retail pfree's of the converted Datums.  Indeed, we don't
need the PLyPlanObject.values field anymore at all, though I left it
in place in the back branches in the name of ABI stability.

Mat Arye and Tom Lane, per report from Mat Arye.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/CADsUR0DvVgnZYWwnmKRK65MZg7YLUSTDLV61qdnrwtrAJgU6xw@mail.gmail.com
2025-01-11 11:45:56 -05:00
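
A hedged sketch of the temporary-context pattern the fix relies on, using the standard PostgreSQL memory-context API; the context name and surrounding code are illustrative, not the committed change:

    MemoryContext tmpcxt;
    MemoryContext oldcxt;

    tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
                                   "PLy output conversion",
                                   ALLOCSET_SMALL_SIZES);
    oldcxt = MemoryContextSwitchTo(tmpcxt);

    /* PLy_output_convert() may leave cruft behind; it now lands in tmpcxt.
     * Datums that must outlive the conversion are copied back into the
     * caller's context before the switch below. */

    MemoryContextSwitchTo(oldcxt);
    MemoryContextDelete(tmpcxt);    /* frees all the conversion cruft at once */
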
Daniel Gustafsson
c35bbdfbc0 Fix missing ldapscheme option in pg_hba_file_rules()
The ldapscheme option was missed when inspecting the HbaLine for
assembling rows for the pg_hba_file_rules function.  Backpatch
to all supported versions.

Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reported-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Bug: 18769
Discussion: https://postgr.es/m/18769-dd8610cbc0405172@postgresql.org
Backpatch-through: v13
2025-01-10 22:02:58 +01:00
Nathan Bossart
0b5927dfac Fix an ALTER GROUP ... DROP USER error message.
This error message stated the privileges required to add a member
to a group even if the user was trying to drop a member:

	postgres=> alter group a drop user b;
	ERROR:  permission denied to alter role
	DETAIL:  Only roles with the ADMIN option on role "a" may add members.

Since the required privileges for both operations are the same, we
can fix this by modifying the message to mention both adding and
dropping members:

	postgres=> alter group a drop user b;
	ERROR:  permission denied to alter role
	DETAIL:  Only roles with the ADMIN option on role "a" may add or drop members.

Author: ChangAo Chen
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/tencent_FAA0D00E3514AAF0BBB6322542A6094FEF05%40qq.com
Backpatch-through: 16
2025-01-09 17:10:13 -06:00
Thomas Munro
be7489662e Fix off_t overflow in pg_basebackup on Windows.
walmethods.c used off_t to navigate around a pg_wal.tar file that could
exceed 2GB, which doesn't work on Windows and would fail with misleading
errors.  Use pgoff_t instead.

Back-patch to all supported branches.

Author: Davinder Singh <davinder.singh@enterprisedb.com>
Reported-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Discussion: https://postgr.es/m/CAKZiRmyM4YnokK6Oenw5JKwAQ3rhP0YTz2T-tiw5dAQjGRXE3Q%40mail.gmail.com
2025-01-09 16:05:08 +13:00
Thomas Munro
0bff6f1da8 Provide 64-bit ftruncate() and lseek() on Windows.
Change our ftruncate() macro to use the 64-bit variant of chsize(), and
add a new macro to redirect lseek() to _lseeki64().

Back-patch to all supported releases, in preparation for a bug fix.

Tested-by: Davinder Singh <davinder.singh@enterprisedb.com>
Discussion: https://postgr.es/m/CAKZiRmyM4YnokK6Oenw5JKwAQ3rhP0YTz2T-tiw5dAQjGRXE3Q%40mail.gmail.com
2025-01-09 14:59:47 +13:00
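
A hedged sketch of what such redirection macros can look like; the real definitions live in src/include/port/win32_port.h and handle error-return conventions more carefully than this:

    /* Route 32-bit-limited CRT calls to their 64-bit Windows variants. */
    typedef __int64 pgoff_t;

    #undef  ftruncate
    #define ftruncate(fd, len)      _chsize_s((fd), (len))      /* 64-bit chsize */

    #undef  lseek
    #define lseek(fd, off, whence)  _lseeki64((fd), (off), (whence))
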
Thomas Munro
9defaaa1da Fix C error reported by Oracle compiler.
Commit 66aaabe7 (branches 13 - 17 only) was not acceptable to the Oracle
Developer Studio compiler on build farm animal wrasse.  It accidentally
used a C++ style return statement to wrap a void function.  None of the
usual compilers complained, but the Oracle compiler is right: that is not allowed in C.
Fix.

Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Z33vgfVgvOnbFLN9%40paquier.xyz
2025-01-08 17:03:25 +13:00
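
For reference, a minimal standalone illustration of the rejected construct; many C compilers accept it quietly, but a strict one is entitled to reject it:

    #include <stdio.h>

    static void
    do_work(void)
    {
        puts("working");
    }

    /*
     * The offending code looked roughly like this C++-ism:
     *
     *     static void wrapper(void) { return do_work(); }
     *
     * In C, a return statement with an expression is not allowed in a
     * function returning void, even when the expression itself has void type.
     */
    static void
    wrapper(void)
    {
        do_work();
        return;
    }

    int
    main(void)
    {
        wrapper();
        return 0;
    }
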
Thomas Munro
c957d7444f Restore smgrtruncate() prototype in back-branches.
It's possible that external code is calling smgrtruncate().  Any
external callers might like to consider the recent changes to
RelationTruncate(), but commit 38c579b0 should not have changed the
function prototype in the back-branches, per ABI stability policy.

Restore smgrtruncate()'s traditional argument list in the back-branches,
but make it a wrapper for a new function smgrtruncate2().  The three
callers in core can use smgrtruncate2() directly.  In master (18-to-be),
smgrtruncate2() is effectively renamed to smgrtruncate(), so this wart
is cleaned up.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKG%2BThae6x6%2BjmQiuALQBT2Ae1ChjMh1%3DkMvJ8y_SBJZrvA%40mail.gmail.com
2025-01-08 10:46:45 +13:00
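
A hedged sketch of the compatibility-wrapper pattern described above; the parameter list of smgrtruncate2() is inferred from the commit message (it takes the existing fork sizes), so treat the details as assumptions rather than the committed code:

    /* Back-branch wrapper: keep the old symbol and signature, and gather the
     * extra information the new function requires. */
    void
    smgrtruncate(SMgrRelation reln, ForkNumber *forknum, int nforks,
                 BlockNumber *nblocks)
    {
        BlockNumber old_nblocks[MAX_FORKNUM + 1];
        int         i;

        for (i = 0; i < nforks; i++)
            old_nblocks[i] = smgrnblocks(reln, forknum[i]);

        smgrtruncate2(reln, forknum, nforks, old_nblocks, nblocks);
    }
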
Andrew Dunstan
653729ce0e Document strange jsonb sort order for empty top level arrays
Slightly faulty logic in the original jsonb code (commit d9134d0a355)
results in an empty top level array sorting less than a json null. We
can't change the sort order now since it would affect btree indexes over
jsonb, so document the anomaly.

Backpatch to all live branches (13 .. 17)

In master, also add a code comment noting the anomaly.

Reported-by: Yan Chengpen
Reviewed-by: Jian He

Discussion: https://postgr.es/m/OSBPR01MB45199DD8DA2D1CECD50518188E272@OSBPR01MB4519.jpnprd01.prod.outlook.com
2025-01-03 10:29:03 -05:00
Richard Guo
a1a9120c79 Ignore nullingrels when looking up statistics
When looking up statistical data about an expression, we do not need
to concern ourselves with the outer joins that could null the
Vars/PHVs contained in the expression.  Accounting for nullingrels in
the expression could cause estimate_num_groups to count the same Var
multiple times if it's marked with different nullingrels.  This is
incorrect, and could lead to "ERROR:  corrupt MVNDistinct entry" when
searching for multivariate n-distinct.

Furthermore, the nullingrels could prevent us from matching an
expression to expressional index columns or to the expressions in
extended statistics, leading to inaccurate estimates.

To fix, strip out all the nullingrels from the expression before we
look up statistical data about it.  There is one ensuing plan change
in the regression tests, but it looks reasonable and does not
compromise its original purpose.

This patch could result in plan changes, but it fixes an actual bug,
so back-patch to v16 where the outer-join-aware-Var infrastructure was
introduced.

Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-2Z4k+nFTiZe0Qbu5n8juUWenDAtMzi98bAZQtwHx0-w@mail.gmail.com
2025-01-02 18:02:02 +09:00
Bruce Momjian
c4f8291cf0 Update copyright for 2025
Backpatch-through: 13
2025-01-01 11:21:54 -05:00
Michael Paquier
e3a27fd06d Fix memory leak in pgoutput with relation attribute map
pgoutput caches the attribute map of a relation, which is free()'d only
when validating a RelationSyncEntry.  However, this code path is not
taken when calling any of the SQL functions able to do some logical
decoding, like pg_logical_slot_{get,peek}_changes(), leaking some memory
into CacheMemoryContext on repeated calls.

To address this, a relation's attribute map is now allocated in
PGOutputData's cachectx and free()'d at the end of the execution of these
SQL functions, when logical decoding ends.  This fix applies down to v15;
v13 and v14 have a similar leak, which will be dealt with later.

Reported-by: Masahiko Sawada
Author: Vignesh C
Reviewed-by: Hou Zhijie
Discussion: https://postgr.es/m/CAD21AoDkAhQVSukOfH3_reuF-j4EU0-HxMqU3dU+bSTxsqT14Q@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm1hewNAsZ_e6FF52a=9drmkRJxtEPrzCB6-9mkJyeBBqA@mail.gmail.com
Backpatch-through: 15
2024-12-30 13:33:59 +09:00
Michael Paquier
359f31c638 Fix handling of orphaned 2PC files in the future at recovery
Before 728bd991c3c4, which improved the support for 2PC files during
recovery, the initial logic scanning files in pg_twophase checked first
whether a file lay in the future of the transaction ID horizon, and only
then whether its transaction ID was aborted or committed, which could
involve a pg_xact lookup.  After this commit, these checks are done in
reverse order.

Files detected as in the future do not have a state that can be checked
in pg_xact, hence this caused recovery to fail abruptly should an
orphaned 2PC file in the future of the transaction ID horizon exist in
pg_twophase at the beginning of recovery.

A test is added to check for this scenario, using an empty 2PC with a
transaction ID large enough to be in the future when running the test.
This test is added in 16 and older versions for now.  17 and newer
versions are impacted by a second bug caused by the addition of the
epoch in the 2PC file names.  An equivalent test will be added in these
branches in a follow-up commit, once the second set of issues reported
are fixed.

Author: Vitaly Davydov, Michael Paquier
Discussion: https://postgr.es/m/11e597-676ab680-8d-374f23c0@145466129
Backpatch-through: 13
2024-12-30 08:06:42 +09:00
Tom Lane
41a252c2ca Exclude parallel workers from connection privilege/limit checks.
Cause parallel workers to not check datallowconn, rolcanlogin, and
ACL_CONNECT privileges.  The leader already checked these things
(except for rolcanlogin which might have been checked for a different
role).  Re-checking can accomplish little except to induce unexpected
failures in applications that might not even be aware that their query
has been parallelized.  We already had the principle that parallel
workers rely on their leader to pass a valid set of authorization
information, so this change just extends that a bit further.

Also, modify the ReservedConnections, datconnlimit and rolconnlimit
logic so that these limits are only enforced against regular backends,
and only regular backends are counted while checking if the limits
were already reached.  Previously, background processes that had an
assigned database or role were subject to these limits (with rather
random exclusions for autovac workers and walsenders), and the set of
existing processes that counted against each limit was quite haphazard
as well.  The point of these limits, AFAICS, is to ensure the
availability of PGPROC slots for regular backends.  Since all other
types of processes have their own separate pools of PGPROC slots, it
makes no sense either to enforce these limits against them or to count
them while enforcing the limit.

While edge-case failures of these sorts have been possible for a
long time, the problem got a good deal worse with commit 5a2fed911
(CVE-2024-10978), which caused parallel workers to make some of these
checks using the leader's current role where before we had used its
AuthenticatedUserId, thus allowing parallel queries to fail after
SET ROLE.  The previous behavior was fairly accidental and I have
no desire to return to it.

This patch includes reverting 73c9f91a1, which was an emergency hack
to suppress these same checks in some cases.  It wasn't complete,
as shown by a recent bug report from Laurenz Albe.  We can also revert
fd4d93d26 and 492217301, which hacked around the same problems in one
regression test.

In passing, remove the special case for autovac workers in
CheckMyDatabase; it seems cleaner to have AutoVacWorkerMain pass
the INIT_PG_OVERRIDE_ALLOW_CONNS flag, now that that does what's
needed.

Like 5a2fed911, back-patch to supported branches (which sadly no
longer includes v12).

Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
2024-12-28 16:08:50 -05:00
Noah Misch
5d94aa4dc3 In REASSIGN OWNED of a database, lock the tuple as mandated.
Commit aac2c9b4fde889d13f859c233c2523345e72d32b mandated such locking
and attempted to fulfill that mandate, but it missed REASSIGN OWNED.
Hence, it remained possible to lose VACUUM's inplace update of
datfrozenxid if a REASSIGN OWNED processed that database at the same
time.  This didn't affect the other inplace-updated catalog, pg_class.
For pg_class, REASSIGN OWNED calls ATExecChangeOwner() instead of the
generic AlterObjectOwner_internal(), and ATExecChangeOwner() fulfills
the locking mandate.

Like in GRANT, implement this by following the locking protocol for any
catalog subject to the generic AlterObjectOwner_internal().  It would
suffice to do this for IsInplaceUpdateOid() catalogs only.  Back-patch
to v13 (all supported versions).

Kirill Reshke.  Reported by Alexander Kukushkin.

Discussion: https://postgr.es/m/CAFh8B=mpKjAy4Cuun-HP-f_vRzh2HSvYFG3rhVfYbfEBUhBAGg@mail.gmail.com
2024-12-28 07:16:26 -08:00
Heikki Linnakangas
643efb18bf meson: Export all libpgport functions in Windows builds
This fixes "unresolved external symbol" errors with extensions that
use functions from libpgport that need special CFLAGS to
compile. Currently, that includes the CRC-32 functions.

Commit 2571c1d5cc did this for libcommon, but I missed that libpgport
has the same issue.

Reported-by: Tom Lane
Backpatch-through: 16, where Meson was introduced
Discussion: https://www.postgresql.org/message-id/CAOdR5yF0krWrxycA04rgUKCgKugRvGWzzGLAhDZ9bzNv8g0Lag@mail.gmail.com
2024-12-25 19:22:41 +02:00
Heikki Linnakangas
4e0d71ff22 meson: Export all libcommon functions in Windows builds
This fixes "unresolved external symbol" errors with extensions that
use functions from libcommon. This was reported with pgvector.

Reported-by: Andrew Kane
Author: Vladlen Popolitov
Backpatch-through: 16, where Meson was introduced
Discussion: https://www.postgresql.org/message-id/CAOdR5yF0krWrxycA04rgUKCgKugRvGWzzGLAhDZ9bzNv8g0Lag@mail.gmail.com
2024-12-25 18:14:30 +02:00
Michael Paquier
4d45e7490c Fix memory leak in pgoutput with publication list cache
The pgoutput module caches publication names in a list and frees it upon
invalidation.  However, the code forgot to free the actual publication
names within the list elements, as publication names are pstrdup()'d in
GetPublication().  This would cause memory to leak in
CacheMemoryContext, bloating it over time as this context is not
cleaned.

This is a problem for WAL senders running for a long time, as an
accumulation of invalidation requests would bloat its cache memory
usage.  A second case, where this leak is easier to see, involves a
backend calling SQL functions like pg_logical_slot_{get,peek}_changes()
which create a new decoding context with each execution.  More
publications create more bloat.

To address this, this commit adds a new memory context within the
logical decoding context and resets it each time the publication names
cache is invalidated, based on a suggestion from Amit Kapila.  This
ensures that the lifespan of the publication names aligns with that of
the logical decoding context.

Contrary to the HEAD-only commit f0c569d71515 that has changed
PGOutputData to track this new child memory context, the context is
tracked with a static variable whose state is reset with a MemoryContext
reset callback attached to PGOutputData->context, so that ABI
compatibility is preserved in stable branches.  This approach is based
on a suggestion from Amit Kapila.

Analyzed-by: Michael Paquier, Jeff Davis
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Michael Paquier, Euler Taveira, Hou Zhijie
Discussion: https://postgr.es/m/Z0khf9EVMVLOc_YY@paquier.xyz
Backpatch-through: 13
2024-12-23 12:48:08 +09:00
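
A hedged sketch of the reset-callback technique used to keep the ABI stable; the cache variable and function names are illustrative, while the MemoryContextCallback machinery is the standard one from the PostgreSQL memory-context API:

    static List *publication_names_cache = NIL;    /* illustrative static cache */

    static void
    pubnames_cache_reset_cb(void *arg)
    {
        /* The owning context is being reset; just forget the pointer. */
        publication_names_cache = NIL;
    }

    static void
    register_pubnames_cache_callback(MemoryContext ctx)
    {
        MemoryContextCallback *cb;

        cb = MemoryContextAlloc(ctx, sizeof(MemoryContextCallback));
        cb->func = pubnames_cache_reset_cb;
        cb->arg = NULL;
        MemoryContextRegisterResetCallback(ctx, cb);
    }
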
Heikki Linnakangas
9d8ab2c461 Update TransactionXmin when MyProc->xmin is updated
GetSnapshotData() set TransactionXmin = MyProc->xmin, but when
SnapshotResetXmin() advanced MyProc->xmin, it did not advance
TransactionXmin correspondingly. That meant that TransactionXmin could
be older than MyProc->xmin, and XIDs between TransactionXmin and
the real MyProc->xmin could be vacuumed away. One known consequence is
in pg_subtrans lookups: we might try to look up the status of an XID
that was already truncated away.

Back-patch to all supported versions.

Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/d27a046d-a1e4-47d1-a95c-fbabe41debb4@iki.fi
2024-12-21 23:42:55 +02:00
Thomas Munro
ba02d24bac Fix corruption when relation truncation fails.
RelationTruncate() does three things, while holding an
AccessExclusiveLock and preventing checkpoints:

1. Logs the truncation.
2. Drops buffers, even if they're dirty.
3. Truncates some number of files.

Step 2 could previously be canceled if it had to wait for I/O, and step
3 could and still can fail in file APIs.  All orderings of these
operations have data corruption hazards if interrupted, so we can't give
up until the whole operation is done.  When dirty pages were discarded
but the corresponding blocks were left on disk due to ERROR, old page
versions could come back from disk, reviving deleted data (see
pgsql-bugs #18146 and several like it).  When primary and standby were
allowed to disagree on relation size, standbys could panic (see
pgsql-bugs #18426) or revive data unknown to visibility management on
the primary (theorized).

Changes:

 * WAL is now unconditionally flushed first
 * smgrtruncate() is now called in a critical section, preventing
   interrupts and causing PANIC on file API failure
 * smgrtruncate() has a new parameter for existing fork sizes,
   because it can't call smgrnblocks() itself inside a critical section

The changes apply to RelationTruncate(), smgr_redo() and
pg_truncate_visibility_map().  That last is also brought up to date with
other evolutions of the truncation protocol.

The VACUUM FileTruncate() failure mode had been discussed in older
reports than the ones referenced below, with independent analysis from
many people, but earlier theories on how to fix it were too complicated
to back-patch.  The more recently invented cancellation bug was
diagnosed by Alexander Lakhin.  Other corruption scenarios were spotted
by me while iterating on this patch and earlier commit 75818b3a.

Back-patch to all supported releases.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reported-by: rootcause000@gmail.com
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/18146-04e908c662113ad5%40postgresql.org
Discussion: https://postgr.es/m/18426-2d18da6586f152d6%40postgresql.org
2024-12-20 23:57:26 +13:00
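
A hedged sketch of the resulting ordering in RelationTruncate()-style code, following the bullet list above; the helper names and the exact smgrtruncate() signature are approximations, not the committed diff:

    /* 1. WAL-log the truncation and flush it unconditionally. */
    lsn = log_smgr_truncate(rel, nblocks);          /* illustrative helper */
    XLogFlush(lsn);

    /* Probe existing fork sizes before the critical section, since
     * smgrnblocks() must not be called inside it. */
    for (i = 0; i < nforks; i++)
        old_nblocks[i] = smgrnblocks(RelationGetSmgr(rel), forks[i]);

    START_CRIT_SECTION();

    /* 2. Drop buffers, even dirty ones; no cancellation is possible here. */
    DropRelationBuffers(RelationGetSmgr(rel), forks, nforks, nblocks);

    /* 3. Truncate the files; a file-API failure now PANICs instead of
     *    leaving buffers and on-disk files out of sync. */
    smgrtruncate(RelationGetSmgr(rel), forks, nforks, old_nblocks, nblocks);

    END_CRIT_SECTION();
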
David Rowley
093fc156b0 Fix Assert failure in WITH RECURSIVE UNION queries
If the non-recursive part of a recursive CTE ended up using
TTSOpsBufferHeapTuple as the table slot type, then a duplicate value
could cause an Assert failure in CheckOpSlotCompatibility() when
checking the hash table for the duplicate value.  The expected slot type
for the deform step was TTSOpsMinimalTuple so the Assert failed when the
TTSOpsBufferHeapTuple slot was used.

This is a long-standing bug which we likely didn't notice because it
seems much more likely that the non-recursive term would have required
projection and used a TTSOpsVirtual slot, which CheckOpSlotCompatibility
is ok with.

There doesn't seem to be any harm done here other than the Assert
failure.  Both TTSOpsMinimalTuple and TTSOpsBufferHeapTuple slot types
require tuple deformation, so the EEOP_*_FETCHSOME ExprState step would
have properly existed in the ExprState.

The solution is to pass NULL for the ExecBuildGroupingEqual's 'lops'
parameter.  This means the ExprState's EEOP_*_FETCHSOME step won't
expect a fixed slot type.  This makes CheckOpSlotCompatibility() happy as
no checking is performed when the ExprEvalStep is not expecting a fixed
slot type.

Reported-by: Richard Guo
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAMbWs4-8U9q2LAtf8+ghV11zeUReA3AmrYkxzBEv0vKnDxwkKA@mail.gmail.com
Backpatch-through: 13, all supported versions
2024-12-19 13:12:41 +13:00
Tom Lane
8cfff087b5 Fix memory leak in pg_restore with zstd-compressed data.
EndCompressorZstd() neglected to free everything.  This was
most visible with a lot of large objects in the dump.

Per report from Tomasz Szypowski.  Back-patch to v16
where this code came in.

Discussion: https://postgr.es/m/DU0PR04MB94193D038A128EF989F922D199042@DU0PR04MB9419.eurprd04.prod.outlook.com
2024-12-17 22:31:26 -05:00
Nathan Bossart
2a74023221 Accommodate very large dshash tables.
If a dshash table grows very large (e.g., the dshash table for
cumulative statistics when there are millions of tables), resizing
it may fail with an error like:

	ERROR: invalid DSA memory alloc request size 1073741824

To fix, permit dshash resizing to allocate more than 1 GB by
providing the DSA_ALLOC_HUGE flag.

Reported-by: Andreas Scherbaum
Author: Matthias van de Meent
Reviewed-by: Cédric Villemain, Michael Paquier, Andres Freund
Discussion: https://postgr.es/m/80a12d59-0d5e-4c54-866c-e69cd6536471%40pgug.de
Backpatch-through: 13
2024-12-17 15:24:45 -06:00
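
A hedged sketch of the nature of the fix in dshash's resize path; dsa_allocate_extended() and the DSA_ALLOC_* flags are the standard DSA API, while the surrounding variable names are illustrative:

    dsa_pointer new_buckets;

    /* Before: plain dsa_allocate() refuses requests above 1 GB. */
    /* new_buckets = dsa_allocate(hash_table->area, bucket_array_size); */

    /* After: explicitly allow huge allocations (and zero the new array). */
    new_buckets = dsa_allocate_extended(hash_table->area,
                                        bucket_array_size,
                                        DSA_ALLOC_HUGE | DSA_ALLOC_ZERO);
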
Heikki Linnakangas
3231cb57d8 Make 009_twophase.pl test pass with recovery_min_apply_delay set
The test failed if you ran the regression tests with TEMP_CONFIG with
recovery_min_apply_delay = '500ms'. Fix the race condition by waiting
for transaction to be applied in the replica, like in a few other
tests.

The failing test was introduced in commit cbfbda7841. Backpatch to all
supported versions like that commit (except v12, which is no longer
supported).

Reported-by: Alexander Lakhin
Discussion: https://www.postgresql.org/message-id/09e2a70a-a6c2-4b5c-aeae-040a7449c9f2@gmail.com
2024-12-16 15:59:31 +02:00
Tom Lane
076b091230 pgbench: fix misprocessing of some nested \if constructs.
An \if command appearing within a false (not-to-be-executed) \if
branch was incorrectly treated the same as \elif.  This could allow
statements within the inner \if to be executed when they should
not be.  Also the missing inner \if stack entry would result in an
assertion failure (in assert-enabled builds) when the final \endif
is reached.

Report and patch by Michail Nikolaev.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/CANtu0oiA1ke=SP6tauhNqkUdv5QFsJtS1p=aOOf_iU+EhyKkjQ@mail.gmail.com
2024-12-15 14:14:15 -05:00
Tom Lane
31daa10fac contrib/earthdistance: Use SQL-standard function bodies.
The @extschema:name@ feature added by 72a5b1fc8 allows us to
make earthdistance's references to the cube extension fully
search-path-secure, so long as all those references are
resolved at extension installation time not runtime.
To do that, we must convert earthdistance's SQL functions to
the new SQL-standard style; but we wanted to do that anyway.

The functions can be updated in our customary style by running
CREATE OR REPLACE FUNCTION in an extension update script.
However, there's still problems in the "CREATE DOMAIN earth"
command: its references to cube functions could be captured
by hostile objects in earthdistance's installation schema,
if that's not where the cube extension is.  Worse, the reference
to the cube type itself as the domain's base could be captured,
and that's not something we could fix after-the-fact in the
update script.

What I've done about that is to change the "CREATE DOMAIN earth"
command in the base script earthdistance--1.1.sql.  Ordinarily,
changing a released extension script is forbidden; but I think
it's okay here since the results of successful (non-trojaned)
script execution will be identical to before.

A good deal of care is still needed to make the extension's scripts
proof against search-path-based attacks.  We have to make sure that
all the function and operator invocations have exact argument-type
matches, to forestall attacks based on supplying a better match.
Fortunately earthdistance isn't very big, so I've just gone through
it and inspected each call to be sure of that.  The only actual code
changes needed were to spell all floating-point constants in the style
'-1'::float8, rather than depending on runtime type conversions and/or
negations.  (I'm not sure that the shortcuts previously used were
attackable, but removing run-time effort is a good thing anyway.)

I believe that this fixes earthdistance enough that we could
mark it trusted and remove the warnings about it that were
added by 7eeb1d986; but I've not done that here.

The primary reason for dealing with this now is that we've
received reports of pg_upgrade failing for databases that use
earthdistance functions in contexts like generated columns.
That's a consequence of 2af07e2f7 having restricted the search_path
used while evaluating such expressions.  The only way to fix that
is to make the earthdistance functions independent of run-time
search_path.  This patch is very much nicer than the alternative of
attaching "SET search_path" clauses to earthdistance's functions:
it is more secure and doesn't create a run-time penalty.  Therefore,
I've chosen to back-patch this to v16 where @extschema:name@
was added.  It won't help unless users update to 16.7 and issue
"ALTER EXTENSION earthdistance UPDATE" before upgrading to 17,
but at least there's now a way to deal with the problem without
manual intervention in the dump/restore process.

Tom Lane and Ronan Dunklau

Discussion: https://postgr.es/m/3316564.aeNJFYEL58@aivenlaptop
Discussion: https://postgr.es/m/6a6439f1-8039-44e2-8fb9-59028f7f2014@mailbox.org
2024-12-14 16:07:18 -05:00
Tom Lane
782cc1aa3d Fix possible crash in pg_dump with identity sequences.
If an owned sequence is considered interesting, force its owning
table to be marked interesting too.  This ensures, in particular,
that we'll fetch the owning table's column names so we have the
data needed for ALTER TABLE ... ADD GENERATED.  Previously there were
edge cases where pg_dump could get SIGSEGV due to not having filled in
the column names.  (The known case is where the owning table has been
made part of an extension while its identity sequence is not a member;
but there may be others.)

Also, if it's an identity sequence, force its dumped-components mask
to exactly match the owning table: dump definition only if we're
dumping the table's definition, dump data only if we're dumping the
table's data, etc.  This generalizes the code introduced in commit
b965f2617 that set the sequence's dump mask to NONE if the owning
table's mask is NONE.  That's insufficient to prevent failures,
because for example the table's mask might only request dumping ACLs,
which would lead us to still emit ALTER TABLE ADD GENERATED even
though we didn't create the table.  It seems better to treat an
identity sequence as though it were an inseparable aspect of the
table, matching the treatment used in the backend's dependency logic.
Perhaps this policy needs additional refinement, but let's wait to
see some field use-cases before changing it further.

While here, add a comment in pg_dump.h warning against writing tests
like "if (dobj->dump == DUMP_COMPONENT_NONE)", which was a bug in this
case.  There is one other example in getPublicationNamespaces, which
if it's not a bug is at least remarkably unclear and under-documented.
Changing that requires a separate discussion, however.

Per report from Artur Zakirov.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKNkYnwXFBf136=u9UqUxFUVagevLQJ=zGd5BsLhCsatDvQsKQ@mail.gmail.com
2024-12-13 14:21:36 -05:00