mirror of https://github.com/postgres/postgres.git synced 2025-06-29 10:41:53 +03:00
Commit Graph

53541 Commits

Author SHA1 Message Date
c75c830e23 Fix broken handling of domains in atthasmissing logic.
If a domain type has a default, adding a column of that type (without
any explicit DEFAULT clause) failed to install the domain's default
value in existing rows, instead leaving the new column null.  This
is unexpected, and it used to work correctly before v11.  The cause
is confusion in the atthasmissing mechanism about which default value
to install: we'd only consider installing an explicitly-specified
default, and then we'd decide that no table rewrite is needed.

To fix, take the responsibility for filling attmissingval out of
StoreAttrDefault, and instead put it into ATExecAddColumn's existing
logic that derives the correct value to fill the new column with.
Also, centralize the logic that determines the need for
default-related table rewriting there, instead of spreading it over
four or five places.

In the back branches, we'll leave the attmissingval-filling code
in StoreAttrDefault even though it's now dead, for fear that some
extension may be depending on that functionality to exist there.
A separate HEAD-only patch will clean up the now-useless code.

Reported-by: jian he <jian.universality@gmail.com>
Author: jian he <jian.universality@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CACJufxHFssPvkP1we7WMhPD_1kwgbG52o=kQgL+TnVoX5LOyCQ@mail.gmail.com
Backpatch-through: 13
2025-03-03 12:43:29 -05:00
76fbb38ef6 Fix pg_strtof() to not crash on NULL endptr.
We had managed not to notice this simple oversight because none
of our calls exercised the case --- until commit 8f427187d.
That led to pg_dump crashing on any platform that uses this code
(currently Cygwin and Mingw).

Even though there's no immediate bug in the back branches, backpatch,
because a non-POSIX-compliant strtof() substitute is trouble waiting
to happen for extensions or future back-patches.
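
For illustration, a hedged sketch (not the actual pg_strtof code) of the
convention a strtof() substitute needs to honor: a NULL endptr simply means
the caller does not want the end pointer back.

    #include <stdlib.h>

    static float
    strtof_substitute(const char *nptr, char **endptr)
    {
        char   *end;
        double  d = strtod(nptr, &end);     /* parse as double */

        if (endptr != NULL)                 /* the guard that was missing */
            *endptr = end;
        return (float) d;
    }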

Diagnosed-by: Alexander Lakhin <exclusion@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/339b3902-4e98-4e31-a744-94e43b7b9292@gmail.com
Backpatch-through: 13
2025-03-01 14:22:56 -05:00
35a591a048 pg_amcheck: Fix inconsistency in memory freeing
Memory for a result created by PQescapeIdentifier() has to be freed with
PQfreemem(), to ensure that both allocation and free come from libpq, but
one spot in pg_amcheck was missing that.

Oversight in b859d94c63.
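
As a minimal usage sketch (illustrative, not the pg_amcheck code), the
pairing libpq expects is:

    #include <string.h>
    #include <libpq-fe.h>

    static void
    use_quoted_identifier(PGconn *conn, const char *relname)
    {
        char *quoted = PQescapeIdentifier(conn, relname, strlen(relname));

        if (quoted == NULL)
            return;             /* OOM or encoding error; see PQerrorMessage() */
        /* ... build and issue a query using "quoted" ... */
        PQfreemem(quoted);      /* allocation came from libpq, so free via libpq */
    }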

Author: Ranier Vilela <ranier.vf@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/CAEudQArD_nKSnYCNUZiPPsJ2tNXgRmLbXGSOrH1vpOF_XtP0Vg@mail.gmail.com
Discussion: https://postgr.es/m/CAEudQArbTWVSbxq608GRmXJjnNSQ0B6R7CSffNnj2hPWMUsRNg@mail.gmail.com
Backpatch-through: 14
2025-02-27 14:06:00 +09:00
ce1475acd5 Fix pg_dumpall to cope with dangling OIDs in pg_auth_members.
There is a race condition between "GRANT role" and "DROP ROLE",
which allows GRANT to install pg_auth_members entries that refer to
dropped roles.  (Commit 6566133c5 prevented that for the grantor
field, but not for the granted or grantee roles.)  We'll soon fix
that, at least in HEAD, but pg_dumpall needs to cope with the
situation in case of pre-existing inconsistency.  As pg_dumpall
stands, it will emit invalid commands like 'GRANT foo TO ""',
which causes pg_upgrade to fail.  Fix it to emit warnings and skip
those GRANTs, instead.

There was some discussion of removing the problem by changing
dumpRoleMembership's query to use JOIN not LEFT JOIN, but that
would result in silently ignoring such entries.  It seems better
to produce a warning.

Pre-v16 branches already coped with dangling grantor OIDs by simply
omitting the GRANTED BY clause.  I left that behavior as-is, although
it's somewhat inconsistent with the behavior of later branches.

Reported-by: Virender Singla <virender.cse@gmail.com>
Discussion: https://postgr.es/m/CAM6Zo8woa62ZFHtMKox6a4jb8qQ=w87R2L0K8347iE-juQL2EA@mail.gmail.com
Backpatch-through: 13
2025-02-21 13:37:16 -05:00
b91326f372 doc: clarify default checksum behavior in non-master branches
Also simplify and correct data checksum wording in master now that it is
the default.  PG 13 did not have the awkward wording.

Reported-by: Felix <afripowered@gmail.com>

Reviewed-by: Laurenz Albe

Discussion: https://postgr.es/m/173928241056.707.3989867022954178032@wrigleys.postgresql.org

Backpatch-through: 14
2025-02-21 13:03:29 -05:00
cecbb131e5 test_escape: Fix output of --help
The short option name -f was not listed, only its long option name
--force-unsupported.

Author: Japin Li
Discussion: https://postgr.es/m/ME0P300MB04452BD1FB1B277D4C1C20B9B6C52@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
Backpatch-through: 13
2025-02-20 09:31:06 +09:00
c21ebcca79 tests: BackgroundPsql: Fix potential for lost errors on windows
This addresses various corner cases in BackgroundPsql:

- On Windows, stdout and stderr may arrive out of order, leading to errors not
  being reported, or being attributed to the wrong statement.

  To fix, emit the "query-separation banner" on both stdout and stderr and
  wait for both.

- Very occasionally the "query-separation banner" would not get removed,
  because we waited only for the banner to arrive, but then tried to replace
  the banner plus a newline.

  To fix, wait for banner and newline.

- For interactive psql, replacing $banner\n is not sufficient, because
  interactive psql outputs \r\n.

- For interactive psql, where commands are echoed to stdout, the \echo
  command, rather than its output, would be matched.

  This would sometimes lead to output from the prior query, or wait_connect(),
  being returned in the next command.

  This also affected wait_connect(), leading to sometimes sending queries to
  psql before the connection actually was established.

While debugging these issues I also found that it's hard to know whether a
query separation banner was attributed to the right query. Make that easier by
counting the queries each BackgroundPsql instance has emitted and include the
number in the banner.

Also emit psql stdout/stderr in query() and wait_connect() as Test::More
notes; without that it's rather hard to debug some issues in CI and the buildfarm.

As this can cause issues not just for to-be-added tests, but also for
existing ones, backpatch the fix to all supported versions.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/wmovm6xcbwh7twdtymxuboaoarbvwj2haasd3sikzlb3dkgz76@n45rzycluzft
Backpatch-through: 13
2025-02-19 10:46:47 -05:00
7c07ab62ae backport: Extend background_psql() to be able to start asynchronously
This is a backport of ba08edb065. Originally it was only applied to master,
but I (Andres) am planning to fix a few bugs in BackgroundPsql, which would be
somewhat harder with the behavioural differences across branches. It's also
generally good for test infrastructure to behave similarly across branches, to
avoid pain during backpatching.

Discussion: https://postgr.es/m/ilcctzb5ju2gulvnadjmhgatnkxsdpac652byb2u3d3wqziyvx@fbuqcglker46

Michael's original commit message:

This commit extends the constructor routine of BackgroundPsql.pm with a
new "wait" parameter.  If set to 0, the routine returns without waiting
for psql to start, ready to consume input.

background_psql() in Cluster.pm gains the same "wait" parameter.  The
default behavior is still to wait for psql to start.  It is now possible to
not wait, giving TAP scripts the ability to perform actions between a
BackgroundPsql startup and its wait_connect() call.

Author: Jacob Champion
Discussion: https://postgr.es/m/CAOYmi+=60deN20WDyCoHCiecgivJxr=98s7s7-C8SkXwrCfHXg@mail.gmail.com
2025-02-19 10:11:14 -05:00
3170aece14 backport: Improve handling of empty query results in BackgroundPsql
This is a backport of 70291a3c66. Originally it was only applied to master,
but I (Andres) am planning to fix a few bugs in BackgroundPsql that are harder
to fix in the backbranches with the old behavior. It's also generally good for
test infrastructure to behave similarly across branches, to avoid pain during
backpatching.  70291a3c66 changes the behavior in some cases, but after
discussing it, we are ok with that; it seems unlikely that there are
out-of-core tests relying on the prior behavior.

Discussion: https://postgr.es/m/ilcctzb5ju2gulvnadjmhgatnkxsdpac652byb2u3d3wqziyvx@fbuqcglker46

Michael's original commit message:

A newline is not added at the end of an empty query result, causing the
banner of the hardcoded \echo not to be discarded.  For scripts that expect
an empty result, this showed up as a stray "QUERY_SEPARATOR" in the output
returned to the caller, which was confusing.

This commit changes BackgroundPsql::query() so that empty results work
correctly, by making the first newline before the banner optional, bringing
more flexibility.

Note that this change affects 037_invalid_database.pl, where three
queries generated an empty result, with the script relying on the data
from the hardcoded banner to exist in the expected output.  These
queries are changed to use query_safe(), leading to a simpler script.

The author has also proposed a test in a different patch where empty
results would exist when using BackgroundPsql.

Author: Jacob Champion
Reviewed-by: Andrew Dunstan, Michael Paquier
Discussion: https://postgr.es/m/CAOYmi+=60deN20WDyCoHCiecgivJxr=98s7s7-C8SkXwrCfHXg@mail.gmail.com
2025-02-19 09:45:29 -05:00
8388cae552 Avoid null pointer dereference crash after OOM in Snowball stemmers.
Absorb upstream bug fix (their commit
e322673a841d9abd69994ae8cd20e191090b6ef4), which prevents a null
pointer dereference crash if SN_create_env() gets a malloc failure
at just the wrong point.

Thanks to Maksim Korotkov for discovering the null-pointer
bug and submitting the fix to upstream snowball.

Reported-by: Maksim Korotkov <m.korotkov@postgrespro.ru>
Author: Maksim Korotkov <m.korotkov@postgrespro.ru>
Discussion: https://postgr.es/m/1d1a46-67ab1000-21-80c451@83151435
Backpatch-through: 13
2025-02-18 21:24:12 -05:00
07a55093cd Fix unsafe access to BufferDescriptors
When considering a local buffer, the GetBufferDescriptor() call in
BufferGetLSNAtomic() would retrieve a shared buffer descriptor using a bad
buffer ID.  Since the code checks whether the buffer is shared before
using the retrieved BufferDesc, this issue did not lead to any
malfunction.  Nonetheless this seems like trouble waiting to happen,
so fix it by ensuring that GetBufferDescriptor() is only called when
we know the buffer is shared.
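
A sketch of the guarded pattern (a fragment assuming the server's
buffer-manager context, not the exact BufferGetLSNAtomic() code):

    if (!BufferIsLocal(buffer))
    {
        /* only shared buffers have entries in the shared descriptor array */
        BufferDesc *bufHdr = GetBufferDescriptor(buffer - 1);

        /* ... lock the header and read the page LSN from bufHdr ... */
    }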

Author: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAHewXNku-o46-9cmUgyv6LkSZ25doDrWq32p=oz9kfD8ovVJMg@mail.gmail.com
Backpatch-through: 13
2025-02-19 11:14:13 +09:00
691c32ba4b test_escape: Fix handling of short options in getopt_long()
This addresses two errors in the module, based on the set of options
supported:
- '-c', for --conninfo, was not listed.
- '-f', for --force-unsupported, was not listed.

While at it, these are now listed in alphabetical order.

Author: Japin Li
Discussion: https://postgr.es/m/ME0P300MB04451FB20CE0346A59C25CADB6FA2@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
Backpatch-through: 13
2025-02-19 09:45:59 +09:00
e5cabe2800 Stamp 14.17. REL_14_17 2025-02-17 16:15:49 -05:00
50fabe48e3 Translation updates
Source-Git-URL: ssh://git@git.postgresql.org/pgtranslation/messages.git
Source-Git-Hash: 8fbdbed3216a3cdbb9ca74f3d9d990b4e96100e3
2025-02-17 17:51:30 +01:00
10b5dac700 Release notes for 17.4, 16.8, 15.12, 14.17, 13.20. 2025-02-16 14:20:33 -05:00
f864a4cdfc In fmtIdEnc(), handle failure of enlargePQExpBuffer().
Coverity complained that we weren't doing that, and it's right.

This fix just makes fmtIdEnc() honor the general convention that OOM
causes a PQExpBuffer to become marked "broken", without any immediate
error.  In the pretty-unlikely case that we actually did hit OOM here,
the end result would be to return an empty string to the caller,
probably resulting in invalid SQL syntax in an issued command (if
nothing else went wrong, which is even more unlikely).  It's tempting
to throw an "out of memory" error if the buffer becomes broken, but
there's not a lot of point in doing that only here and not in hundreds
of other PQExpBuffer-using places in pg_dump and similar callers.
The whole issue could do with some non-time-crunched redesign, perhaps.
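
The convention, as a sketch (a fragment; "needed" is illustrative):

    if (!enlargePQExpBuffer(buf, needed))
        return buf->data;       /* buffer is now marked "broken" and holds an
                                 * empty string; PQExpBufferBroken(buf) is true */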

This is a followup to the fixes for CVE-2025-1094, and should be
included if cherry-picking those fixes.
2025-02-16 12:46:35 -05:00
c08309584a Make escaping functions retain trailing bytes of an invalid character.
Instead of dropping the trailing byte(s) of an invalid or incomplete
multibyte character, replace only the first byte with a known-invalid
sequence, and process the rest normally.  This seems less likely to
confuse incautious callers than the behavior adopted in 5dc1e42b4.

While we're at it, adjust PQescapeStringInternal to produce at most
one bleat about invalid multibyte characters per string.  This
matches the behavior of PQescapeInternal, and avoids the risk of
producing tons of repetitive junk if a long string is simply given
in the wrong encoding.

This is a followup to the fixes for CVE-2025-1094, and should be
included if cherry-picking those fixes.

Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/20250215012712.45@rfd.leadboat.com
Backpatch-through: 13
2025-02-15 16:20:21 -05:00
985908df18 Fix PQescapeLiteral()/PQescapeIdentifier() length handling
In 5dc1e42b4f I fixed bugs in various escape functions; unfortunately, as part
of that I introduced a new bug in PQescapeLiteral()/PQescapeIdentifier().  The
bug is that I made PQescapeInternal() just use strlen(), rather than taking
the specified input length into account.

That's bad, because it can lead to including input that wasn't intended to be
included (in case len is shorter than null termination of the string) and
because it can lead to reading invalid memory if the input string is not null
terminated.
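
As a usage sketch of why the length argument matters (a fragment; "conn" is
assumed to be an open connection, and the input values are illustrative):

    const char *raw = "abc'-- trailing bytes that must not be read";
    char       *lit = PQescapeLiteral(conn, raw, 3);   /* escape only "abc" */

    if (lit != NULL)
    {
        /* lit is the literal 'abc'; bytes past the given length are untouched */
        PQfreemem(lit);
    }
    else
        fprintf(stderr, "%s", PQerrorMessage(conn));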

Expand test_escape to cover this kind of bug:

a) for escape functions with length support, append data that should not be
   escaped and check that it is not

b) add valgrind requests to detect access of bytes that should not be touched

Author: Tom Lane <tgl@sss.pgh.pa.us>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/Z64jD3u46gObCo1p@pryzbyj2023
Backpatch: 13
2025-02-14 18:09:25 -05:00
971566e965 Fix assertion on dereferenced object
Commit 27cc7cd2bc accidentally placed the assertion ensuring
that the pointer isn't NULL after it had already been accessed.
Fix by moving the pointer dereferencing to after the assertion.
Backpatch to all supported branches.
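
The shape of the fix, with hypothetical names:

    /* before: the dereference came first, defeating the assertion */
    value = ptr->field;
    Assert(ptr != NULL);

    /* after: assert before touching the pointer */
    Assert(ptr != NULL);
    value = ptr->field;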

Author: Dmitry Koval <d.koval@postgrespro.ru>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/1618848d-cdc7-414b-9c03-08cf4bef4408@postgrespro.ru
Backpatch-through: 13
2025-02-14 11:50:56 +01:00
8e58f8024d Fix MakeTransitionCaptureState() to return a consistent result
When an UPDATE trigger referencing a new table and a DELETE trigger
referencing an old table are both present, MakeTransitionCaptureState()
returns an inconsistent result for UPDATE commands in its set of flags
and tuplestores holding the TransitionCaptureState for transition
tables.

As proved by the test added here, this issue causes a crash in v14 and
earlier versions (down to 11, actually; older versions do not support
triggers on partitioned tables) during cross-partition updates on a
partitioned table.  v15 and newer versions are safe thanks to
7103ebb7aa.

This commit fixes the function so that it returns a consistent state
by using portions of the changes made in commit 7103ebb7aa for v13 and
v14.  v15 and newer versions are slightly tweaked to match with the
older versions, mainly for consistency across branches.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20250207.150238.968446820828052276.horikyota.ntt@gmail.com
Backpatch-through: 13
2025-02-13 16:31:11 +09:00
80b23bf5b8 Doc: Fix punctuation errors
Author: 斉藤登 <noborusai@gmail.com>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/CAAM3qnL6i-BSu5rB2+KiHLjMCOXiQEiPMBvEj7F1CgUzZMooLA@mail.gmail.com
Backpatch-through: 13
2025-02-12 13:41:30 +07:00
6884202705 Stamp 14.16. REL_14_16 2025-02-10 18:26:37 -05:00
e748064669 Last-minute updates for release notes.
Security: CVE-2025-1094
2025-02-10 18:16:25 -05:00
e1f1b030d7 Adapt appendPsqlMetaConnect() to the new fmtId() encoding expectations.
We need to tell fmtId() what encoding to assume, but this function
doesn't know that.  Fortunately we can fix that without changing the
function's API, because we can just use SQL_ASCII.  That's because
database names in connection requests are effectively binary not text:
no encoding-aware processing will happen on them.

This fixes XversionUpgrade failures seen in the buildfarm.  The
alternative of having pg_upgrade use setFmtEncoding() is unappetizing,
given that it's connecting to multiple databases that may have
different encodings.

Andres Freund, Noah Misch, Tom Lane

Security: CVE-2025-1094
2025-02-10 16:31:01 -05:00
4a6825c762 Fix type in test_escape test
On machines where char is unsigned this could lead to option parsing looping
endlessly. It's also too narrow a type on other hardware.
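
Presumably this is the variable holding getopt_long()'s return value; the
portable pattern is to use an int (a fragment; the optstring and long_options
table are illustrative):

    int c;      /* not "char c": getopt_long() returns an int, and the -1
                 * sentinel cannot be represented reliably in a plain char */

    while ((c = getopt_long(argc, argv, "c:f", long_options, NULL)) != -1)
    {
        /* handle each option */
    }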

Found via Tom Lane's monitoring of the buildfarm.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Security: CVE-2025-1094
Backpatch-through: 13
2025-02-10 12:12:59 -05:00
17b7894691 docs: EUC_TW can be up to four bytes wide, not three
Backpatch-through: 13
Security: CVE-2025-1094
2025-02-10 10:03:40 -05:00
46ddf98df0 Add test of various escape functions
As highlighted by the prior commit, writing correct escape functions is less
trivial than one might hope.

This test module tries to verify that different escaping functions behave
reasonably.  For example, it tests that:

- Invalidly encoded input to an escape function leads to invalidly encoded
  output

- Trailing incomplete multi-byte characters are handled sensibly

- Escaped strings are parsed as a single statement by psql's parser (which
  derives from the backend parser)

There are further tests that would be good to add. But even in the current
state it was rather useful for writing the fix in the prior commit.

Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 13
Security: CVE-2025-1094
2025-02-10 10:03:40 -05:00
e0ef3d776a Fix handling of invalidly encoded data in escaping functions
Previously, invalidly encoded input to various escaping functions could lead to
the escaped string getting incorrectly parsed by psql.  To be safe, escaping
functions need to ensure that neither invalid nor incomplete multi-byte
characters can be used to "escape" from being quoted.

Functions which can report errors now return an error in more cases than
before. Functions that cannot report errors now replace invalid input bytes
with a byte sequence that cannot be used to escape the quotes and that is
guaranteed to error out when a query is sent to the server.

The following functions are fixed by this commit:
- PQescapeLiteral()
- PQescapeIdentifier()
- PQescapeString()
- PQescapeStringConn()
- fmtId()
- appendStringLiteral()
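
For example, with PQescapeStringConn() callers can rely on the error flag
(usage sketch as a fragment; "conn" and "input" are assumed, and the output
buffer follows the documented 2*length+1 sizing rule):

    char    escaped[2 * 64 + 1];        /* input assumed to be at most 64 bytes */
    int     error = 0;

    (void) PQescapeStringConn(conn, escaped, input, strlen(input), &error);
    if (error)
        fprintf(stderr, "escaping failed: %s", PQerrorMessage(conn));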

Reported-by: Stephen Fewer <stephen_fewer@rapid7.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 13
Security: CVE-2025-1094
2025-02-10 10:03:40 -05:00
2ae54ae17e Specify the encoding of input to fmtId()
This commit adds fmtIdEnc() and fmtQualifiedIdEnc(), which allow the encoding
to be specified as an explicit argument.  Additionally, setFmtEncoding() is
provided, which defines the encoding to use when no explicit encoding is
given, so as not to break all existing code using fmtId().

All users of fmtId()/fmtQualifiedId() are either converted to the explicit
version or a call to setFmtEncoding() has been added.
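
A hedged sketch of the two call patterns described here (the exact signatures
are assumed from this description, not taken from the headers):

    /* explicit-encoding form */
    appendPQExpBuffer(query, "DROP TABLE %s;",
                      fmtIdEnc(tablename, PQclientEncoding(conn)));

    /* legacy form: establish the encoding once, then keep using fmtId() */
    setFmtEncoding(PQclientEncoding(conn));
    appendPQExpBuffer(query, "DROP TABLE %s;", fmtId(tablename));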

This commit does not yet make use of the now well-defined encoding; that
will happen in a subsequent commit.

Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 13
Security: CVE-2025-1094
2025-02-10 10:03:40 -05:00
5bc33cbeae Add pg_encoding_set_invalid()
There are cases where we cannot / do not want to error out for invalidly
encoded input.  In such cases it can be useful to replace e.g. an incomplete
multi-byte character with bytes that will trigger an error when they get
validated as part of a larger string.

Unfortunately, until now, for some encodings no such sequence existed.  For
those encodings this commit removes one previously accepted input combination.
We consider that to be OK: the chosen bytes are outside the valid ranges for
those encodings; we just previously failed to detect that.

As we cannot add a new field to pg_wchar_table without breaking ABI, this is
implemented "in-line" in the newly added function.

Author: Noah Misch <noah@leadboat.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Backpatch-through: 13
Security: CVE-2025-1094
2025-02-10 10:03:40 -05:00
04f31c8d0d Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 7ebf0f8af5adf2da3fd838965b89fbfe5850f208
2025-02-10 15:16:57 +01:00
2e44eefb9d Release notes for 17.3, 16.7, 15.11, 14.16, 13.19. 2025-02-09 13:58:53 -05:00
5addea71c7 Fix pgbench performance issue induced by commit af35fe501.
Commit af35fe501 caused "pgbench -i" to emit a '\r' character
for each data row loaded (when stderr is a terminal).
That's effectively invisible on-screen, but it causes the
connected terminal program to consume a lot of cycles.
It's even worse if you're connected over ssh, as the data
then has to pass through the ssh tunnel.

Simplest fix is to move the added logic inside the if-tests
that check whether to print a progress line.  We could do
it another way that avoids duplicating these few lines,
but on the whole this seems the most transparent way to
write it.

Like the previous commit, back-patch to all supported versions.

Reported-by: Andres Freund <andres@anarazel.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/4k4drkh7bcmdezq6zbkhp25mnrzpswqi2o75d5uv2eeg3aq6q7@b7kqdmzzwzgb
Backpatch-through: 13
2025-02-07 13:41:43 -05:00
9f6ad2f43c Doc: clarify behavior of timestamptz input some more.
Try to make it absolutely plain that we don't retain the
originally specified time zone, only the UTC timestamp.

While at it, make glossary entries for "UTC" and "GMT".

Author: Robert Treat <rob@xzilla.net>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/173796426022.1064.9135167366862649513@wrigleys.postgresql.org
Backpatch-through: 13
2025-02-07 12:40:41 -05:00
ceeae76704 doc: Update links which returned 404
Two links in the isn module documentation were pointing to tools
which had been moved, resulting in 404 error responses.  Update
to the new URLs for the tools.  The link to the Sequoia 2000 page
in the history section was no longer working, and since the page
is no longer available online, update our link to point at the paper
instead, which is on a stable URL.

These links exist in all versions of the documentation so backpatch
to all supported branches.

Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: charukiewicz@protonmail.com
Discussion: https://postgr.es/m/173679670185.705.8565555804465055355@wrigleys.postgresql.org
Backpatch-through: 13
2025-02-05 13:58:40 +01:00
f2205448b1 pg_controldata: Fix possible errors on corrupted pg_control
Protect against malformed timestamps.  Also protect against negative WalSegSz
as it triggers division by zero:

((0x100000000UL) / (WalSegSz)) can turn into zero in

XLogFileName(xlogfilename, ControlFile->checkPointCopy.ThisTimeLineID,
             segno, WalSegSz);

because if WalSegSz is -1 then by arithmetic rules in C we get
0x100000000UL / 0xFFFFFFFFFFFFFFFFUL == 0.
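
A sketch of the kind of validation this implies (illustrative, not the exact
patch; IsValidWalSegSize() is the existing sanity check for segment sizes):

    int WalSegSz = ControlFile->xlog_seg_size;

    if (!IsValidWalSegSize(WalSegSz))
        printf("WARNING: invalid WAL segment size in pg_control (%d bytes)\n",
               WalSegSz);
    else
        XLogFileName(xlogfilename, ControlFile->checkPointCopy.ThisTimeLineID,
                     segno, WalSegSz);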

Author: Ilyasov Ian <ianilyasov@outlook.com>
Author: Anton Voloshin <a.voloshin@postgrespro.ru>
Backpatch-through: 13
2025-02-05 00:45:16 +02:00
eeaf6891be vacuumdb: Add missing PQfinish() calls to vacuum_one_database().
A few of the version checks in vacuum_one_database() do not call
PQfinish() before exiting.  This precedent was unintentionally
established in commit 00d1e88d36, and while it's probably not too
problematic, it seems better to properly close the connection.
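
The shape of the fix, as a sketch (the version cutoff and message are
illustrative, not taken from vacuumdb):

    if (PQserverVersion(conn) < 90600)      /* unsupported for this option */
    {
        PQfinish(conn);                     /* the call that was missing */
        fprintf(stderr, "cannot vacuum: server version is too old\n");
        exit(1);
    }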

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/Z6JAwqN1I8ljTuXp%40nathan
Backpatch-through: 13
2025-02-04 13:26:57 -06:00
d7260f92ad Mention jsonlog in description of logging_collector in GUC table
The description of logging_collector only mentioned stderr and csvlog,
forgetting about jsonlog.  Oversight in dc686681e0, which added support for
jsonlog in log_destination.

While at it, the description in the GUC table is tweaked to be more
consistent with the documentation and postgresql.conf.sample.

Author: Umar Hayat
Reviewed-by: Ashutosh Bapat, Tom Lane
Discussion: https://postgr.es/m/CAD68Dp1K_vBYqBEukHw=1jF7e76t8aszGZTFL2ugi=H7r=a7MA@mail.gmail.com
Backpatch-through: 13
2025-02-02 11:31:32 +09:00
71832a189b Fix comment of StrategySyncStart()
The top comment of StrategySyncStart() mentions BufferSync(), but this
function calls BgBufferSync(), not BufferSync().

Oversight in 9cd00c457e.

Author: Ashutosh Bapat
Discussion: https://postgr.es/m/CAExHW5tgkjag8i-s=RFrCn5KAWDrC4zEPPkfUKczfccPOxBRQQ@mail.gmail.com
Backpatch-through: 13
2025-01-31 11:06:12 +09:00
6a33bb35c0 Avoid integer overflow while testing wal_skip_threshold condition.
smgrDoPendingSyncs had two distinct risks of integer overflow while
deciding which way to ensure durability of a newly-created relation.
First, it accumulated the total size of all forks in a variable of
type BlockNumber (uint32).  While we restrict an individual fork's
size to fit in that, I don't believe there's such a restriction on
all of them added together.  Second, it proceeded to multiply the
sum by BLCKSZ, which most certainly could overflow a uint32.

(The exact expression is total_blocks * BLCKSZ / 1024.  The
compiler might choose to optimize that to total_blocks * 8,
which is not at quite as much risk of overflow as a literal
reading would be, but it's still wrong.)

If an overflow did occur it could lead to a poor choice to
shove a very large relation into WAL instead of fsync'ing it.
This wouldn't be fatal, but it could be inefficient.

Change total_blocks to uint64 which should be plenty, and
rearrange the comparison calculation to be overflow-safe.
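
A sketch of the overflow-safe form (variable names are illustrative; the
point is accumulating in 64 bits so that neither the sum of the forks nor
the multiplication by BLCKSZ can wrap):

    uint64  total_blocks = 0;

    for (int fork = 0; fork <= MAX_FORKNUM; fork++)
        if (smgrexists(srel, fork))
            total_blocks += smgrnblocks(srel, fork);

    /* 64-bit arithmetic: total_blocks * BLCKSZ cannot overflow here */
    bool    wal_log_it = (total_blocks * BLCKSZ / 1024 >= wal_skip_threshold);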

I noticed this while looking for ramifications of the proposed
change in MAX_KILOBYTES.  It's not entirely clear to me why
wal_skip_threshold is limited to MAX_KILOBYTES in the
first place, but in any case this code is unsafe regardless
of the range of wal_skip_threshold.

Oversight in c6b92041d which introduced wal_skip_threshold,
so back-patch to v13.

Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
Backpatch-through: 13
2025-01-30 15:36:07 -05:00
c05268e6ec Handle default NULL insertion a little better.
If a column is omitted in an INSERT, and there's no column default,
the code in preptlist.c generates a NULL Const to be inserted.
Furthermore, if the column is of a domain type, we wrap the Const
in CoerceToDomain, so as to throw a run-time error if the domain
has a NOT NULL constraint.  That's fine as far as it goes, but
there are two problems:

1. We're being sloppy about the type/typmod that the Const is
labeled with.  It really should have the domain's base type/typmod,
since it's the input to CoerceToDomain not the output.  This can
result in coerce_to_domain inserting a useless length-coercion
function (useless because it's being applied to a null).  The
coercion would typically get const-folded away later, but it'd
be better not to create it in the first place.

2. We're not applying expression preprocessing (specifically,
eval_const_expressions) to the resulting expression tree.
The planner's primary expression-preprocessing pass already happened,
so that means the length coercion step and CoerceToDomain node miss
preprocessing altogether.

This is at the least inefficient, since it means the length coercion
and CoerceToDomain will actually be executed for each inserted row,
though they could be const-folded away in most cases.  Worse, it
seems possible that missing preprocessing for the length coercion
could result in an invalid plan (for example, due to failing to
perform default-function-argument insertion).  I'm not aware of
any live bug of that sort with core datatypes, and it might be
unreachable for extension types as well because of restrictions of
CREATE CAST, but I'm not entirely convinced that it's unreachable.
Hence, it seems worth back-patching the fix (although I only went
back to v14, as the patch doesn't apply cleanly at all in v13).

There are several places in the rewriter that are building null
domain constants the same way as preptlist.c.  While those are
before the planner and hence don't have any reachable bug, they're
still applying a length coercion that will be const-folded away
later, uselessly wasting cycles.  Hence, make a utility routine
that all of these places can call to do it right.

Making this code more careful about the typmod assigned to the
generated NULL constant has visible but cosmetic effects on some
of the plans shown in contrib/postgres_fdw's regression tests.

Discussion: https://postgr.es/m/1865579.1738113656@sss.pgh.pa.us
Backpatch-through: 14
2025-01-29 15:31:55 -05:00
54f9afea7a Avoid breaking SJIS encoding while de-backslashing Windows paths.
When running on Windows, canonicalize_path() converts '\' to '/'
to prevent confusing the Windows command processor.  It was
doing that in a non-encoding-aware fashion; but in SJIS there
are valid two-byte characters whose second byte matches '\'.
So encoding corruption ensues if such a character is used in
the path.

We can fairly easily fix this if we know which encoding is
in use, but a lot of our utilities don't have much of a clue
about that.  After some discussion we decided we'd settle for
fixing this only in psql, and assuming that its value of
client_encoding matches what the user is typing.
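
A minimal sketch of the encoding-aware scan (illustrative, not the actual
psql code; "encoding" stands for psql's client_encoding as an ID that libpq
understands):

    /* Advance by whole characters, so the second byte of a multi-byte
     * SJIS character can never be mistaken for a backslash. */
    for (char *p = path; *p; p += PQmblen(p, encoding))
    {
        if (PQmblen(p, encoding) == 1 && *p == '\\')
            *p = '/';
    }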

It seems hopeless to get the server to deal with the problematic
characters in database path names, so we'll just declare that
case to be unsupported.  That means nothing need be done in
the server, nor in utility programs whose only contact with
file path names is for database paths.  But psql frequently
deals with client-side file paths, so it'd be good if it
didn't mess those up.

Bug: #18735
Reported-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com>
Discussion: https://postgr.es/m/18735-4acdb3998bb9f2b1@postgresql.org
Backpatch-through: 13
2025-01-29 14:24:36 -05:00
25e99483c4 At update of non-LP_NORMAL TID, fail instead of corrupting page header.
The right mix of DDL and VACUUM could corrupt a catalog page header such
that PageIsVerified() durably fails, requiring a restore from backup.
This affects only catalogs that both have a syscache and have DDL code
that uses syscache tuples to construct updates.  One of the test
permutations shows a variant not yet fixed.

This makes !TransactionIdIsValid(TM_FailureData.xmax) possible with
TM_Deleted.  I think core and PGXN are indifferent to that.

Per bug #17821 from Alexander Lakhin.  Back-patch to v13 (all supported
versions).  The test case is v17+, since it uses INJECTION_POINT.

Discussion: https://postgr.es/m/17821-dd8c334263399284@postgresql.org
2025-01-25 11:28:19 -08:00
7b3259bd7c Test ECPG decadd(), decdiv(), decmul(), and decsub() for risnull() input.
Since commit 757fb0e5a9, these
Informix-compat functions return 0 without changing the output
parameter.  Initialize the output parameter before the test call, making
that obvious.  Before this, the expected test output has been depending
on freed stack memory.  "gcc -ftrivial-auto-var-init=pattern" revealed
that.  Back-patch to v13 (all supported versions).

Discussion: https://postgr.es/m/20250106192748.cf.nmisch@google.com
2025-01-25 11:28:19 -08:00
4106942f00 Doc: recommend "psql -X" for restoring pg_dump scripts.
This practice avoids possible problems caused by non-default psql
options, such as disabling AUTOCOMMIT.

Author: Shinya Kato <Shinya11.Kato@oss.nttdata.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/96ff23a5d858ff72ca8e823a014d16fe@oss.nttdata.com
Backpatch-through: 13
2025-01-25 12:42:05 -05:00
fb60050f42 Use the correct sizeof() in BufFileLoadBuffer
The sizeof() call should reference buffer.data, because that's the
buffer we're reading data into, not the whole PGAlignedBuffer union.
This was introduced by 44cac93464, which replaced the simple buffer
with a PGAlignedBuffer field.
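
The distinction, as a sketch ("read_into" is a hypothetical stand-in for the
actual read call):

    PGAlignedBuffer buffer;

    /* correct: bounded by the member actually being filled */
    read_into(buffer.data, sizeof(buffer.data));

    /* compiles and happens to be the same size, but names the wrong object */
    read_into(buffer.data, sizeof(buffer));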

It's benign, because the buffer is the largest field of the union, so
the sizes are the same. But it's easy to trip over this in a patch, so
fix and backpatch. Commit 44cac93464 went into 12, but that's EOL.

Backpatch-through: 13
Discussion: https://postgr.es/m/928bdab1-6567-449f-98c4-339cd2203b87@vondra.me
2025-01-25 02:16:06 +01:00
63a4b9f767 Don't ask for bug reports about pthread_is_threaded_np() != 0.
We thought that this condition was unreachable in ExitPostmaster,
but actually it's possible if you have both a misconfigured locale
setting and some other mistake that causes PostmasterMain to bail
out before reaching its own check of pthread_is_threaded_np().

Given the lack of other reports, let's not ask for bug reports if
this occurs; instead just give the same hint as in PostmasterMain.

Bug: #18783
Reported-by: anani191181515@gmail.com
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/18783-d1873b95a59b9103@postgresql.org
Discussion: https://postgr.es/m/206317.1737656533@sss.pgh.pa.us
Backpatch-through: 13
2025-01-23 14:23:39 -05:00
3085993052 Repair incorrect handling of AfterTriggerSharedData.ats_modifiedcols.
This patch fixes two distinct errors that both ultimately trace
to commit 71d60e2aa, which added the ats_modifiedcols field.

The more severe error is that ats_modifiedcols wasn't accounted for
in afterTriggerAddEvent's scanning loop that looks for a pre-existing
duplicate AfterTriggerSharedData.  Thus, a new event could be
incorrectly matched to an AfterTriggerSharedData that has a different
value of ats_modifiedcols, resulting in the wrong tg_updatedcols
bitmap getting passed to the trigger whenever it finally gets fired.
We'd not noticed because (a) few triggers consult tg_updatedcols,
and (b) we had no tests exercising a case where such a trigger was
called as an AFTER trigger.  In the test case added by this commit,
contrib/lo's trigger fails to remove a large object when expected
because (without this fix) it thinks the LO OID column hasn't changed.

The other problem was introduced by commit ce5aaea8c, which copied the
modified-columns bitmap into trigger-related storage.  It made a copy
for every trigger event, whereas what we really want is to make a new
copy only when we make a new AfterTriggerSharedData entry.  (We could
imagine adding extra logic to reduce the number of bitmap copies still
more, but it doesn't look worthwhile at the moment.)  In a simple test
of an UPDATE of 10000000 rows with a single AFTER trigger, this thinko
roughly tripled the amount of memory consumed by the pending-triggers
data structures, from 160446744 to 480443440 bytes.

Fixing the first problem requires introducing a bms_equal() call into
afterTriggerAddEvent's scanning loop, which is slightly annoying from
a speed perspective.  However, getting rid of the excessive bms_copy()
calls from the second problem balances that out; overall speed of
trigger operations is the same or slightly better, in my tests.
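
A sketch of the tightened match, written as a helper for clarity
(hypothetical; the real loop compares the fields inline, and only the fields
mentioned above are shown):

    static bool
    shared_data_matches(const AfterTriggerSharedData *a,
                        const AfterTriggerSharedData *b)
    {
        return a->ats_tgoid == b->ats_tgoid &&
               a->ats_event == b->ats_event &&
               bms_equal(a->ats_modifiedcols, b->ats_modifiedcols);
    }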

Discussion: https://postgr.es/m/3496294.1737501591@sss.pgh.pa.us
Backpatch-through: 13
2025-01-22 11:58:20 -05:00
20b4819d0e Update time zone data files to tzdata release 2025a.
DST law changes in Paraguay.
Historical corrections for the Philippines.

Backpatch-through: 13
2025-01-20 16:49:15 -05:00
aac0384975 Avoid using timezone Asia/Manila in regression tests.
The freshly-released 2025a version of tzdata has a refined estimate
for the longitude of Manila, changing their value for LMT in
pre-standardized-timezone days.  This changes the output of one of
our test cases.  Since we need to be able to run with system tzdata
files that may or may not contain this update, we'd better stop
making that specific test.

I switched it to use Asia/Singapore, which has a roughly similar UTC
offset.  That LMT value hasn't changed in tzdb since 2003, so we can
hope that it's well established.

I also noticed that this set of make_timestamptz tests only exercises
zones east of Greenwich, which seems rather sad, and was not the
original intent AFAICS.  (We've already changed these tests once
to stabilize their results across tzdata updates, cf 66b737cd9;
it looks like I failed to consider the UTC-offset-sign aspect then.)
To improve that, add a test with Pacific/Honolulu.  That LMT offset
is also quite old in tzdb, so we'll cross our fingers that it doesn't
get improved.

Reported-by: Christoph Berg <cb@df7cb.de>
Discussion: https://postgr.es/m/Z46inkznCxesvDEb@msg.df7cb.de
Backpatch-through: 13
2025-01-20 15:47:53 -05:00