If a domain type has a default, adding a column of that type (without
any explicit DEFAULT clause) failed to install the domain's default
value in existing rows, instead leaving the new column null. This
is unexpected, and it used to work correctly before v11. The cause
is confusion in the atthasmissing mechanism about which default value
to install: we'd only consider installing an explicitly-specified
default, and then we'd decide that no table rewrite is needed.
To fix, take the responsibility for filling attmissingval out of
StoreAttrDefault, and instead put it into ATExecAddColumn's existing
logic that derives the correct value to fill the new column with.
Also, centralize the logic that determines the need for
default-related table rewriting there, instead of spreading it over
four or five places.
In the back branches, we'll leave the attmissingval-filling code
in StoreAttrDefault even though it's now dead, for fear that some
extension may be depending on that functionality to exist there.
A separate HEAD-only patch will clean up the now-useless code.
Reported-by: jian he <jian.universality@gmail.com>
Author: jian he <jian.universality@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CACJufxHFssPvkP1we7WMhPD_1kwgbG52o=kQgL+TnVoX5LOyCQ@mail.gmail.com
Backpatch-through: 13
We had managed not to notice this simple oversight because none
of our calls exercised the case --- until commit 8f427187d.
That led to pg_dump crashing on any platform that uses this code
(currently Cygwin and Mingw).
Even though there's no immediate bug in the back branches, backpatch,
because a non-POSIX-compliant strtof() substitute is trouble waiting
to happen for extensions or future back-patches.
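For context, a minimal standalone illustration of the POSIX contract that
callers rely on (plain C, not PostgreSQL code; the input string is made up):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        const char *str = "1.5x";
        char       *endptr;

        float       val = strtof(str, &endptr);

        /*
         * POSIX requires endptr to point at the first unparsed character,
         * and requires 0 to be returned with endptr == str when no
         * conversion is possible (plus ERANGE in errno on overflow or
         * underflow).  A substitute that deviates from this contract can
         * silently break callers such as pg_dump.
         */
        if (endptr == str)
            printf("no conversion performed\n");
        else
            printf("parsed %f, stopped at \"%s\"\n", val, endptr);

        return 0;
    }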
Diagnosed-by: Alexander Lakhin <exclusion@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/339b3902-4e98-4e31-a744-94e43b7b9292@gmail.com
Backpatch-through: 13
The function in charge of freeing the memory from a result created by
PQescapeIdentifier() has to be PQfreemem(), to ensure that both
allocation and free come from libpq.
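For reference, a minimal, self-contained illustration of that rule (the
connection parameters and the identifier are placeholders):

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("");   /* parameters from the environment */

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* The result is allocated inside libpq ... */
        char *ident = PQescapeIdentifier(conn, "my table", strlen("my table"));

        if (ident == NULL)
            fprintf(stderr, "escaping failed: %s", PQerrorMessage(conn));
        else
        {
            printf("%s\n", ident);
            /* ... so it must be released with PQfreemem(), not free() or pg_free(). */
            PQfreemem(ident);
        }

        PQfinish(conn);
        return 0;
    }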
One spot in pg_upgrade did not respect that for pg_database's datlocale
(daticulocale in v16) when the collation provider is libc (i.e.
datlocale/daticulocale is NULL): the value was allocated with pg_strdup()
but freed with PQfreemem(). The code is changed to always use
PQescapeLiteral() when processing the input.
Oversight in 9637badd9f92. This commit is similar to 48e4ae9a0707 and
5b94e2753439.
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/Z601RQxTmIUohdkV@paquier.xyz
Backpatch-through: 16
This commit documents that the failover option is not copied when using
the pg_copy_logical_replication_slot function.
In passing, we modify the comments in the function to clarify the reason
for this behavior.
Reported-by: <duffieldzane@gmail.com>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/173976850802.682632.11315364077431550250@wrigleys.postgresql.org
When a standby replays an XLOG_PARAMETER_CHANGE record that lowers
wal_level below logical, we invalidate all logical slots in hot
standby mode. However, if this record was replayed while not in hot
standby mode, logical slots could remain valid even after promotion,
potentially causing an assertion failure during WAL record decoding.
To fix this issue, this commit adds a check for hot_standby status
when restoring a logical replication slot on standbys. This check
ensures that logical slots are invalidated when they become
incompatible due to insufficient wal_level during recovery.
Backpatch to v16 where logical decoding on standby was introduced.
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/CAD21AoABoFwGY_Rh2aeE6tEq3HkJxf0c6UeOXn4VV9v6BAQPSw%40mail.gmail.com
Backpatch-through: 16
Due to a misunderstanding on my part, commit 235328ee4 did not go far
enough to silence older versions of Valgrind. For those, it was the bit
scan that was problematic, not the subsequent bit-masking operation. To
fix, use the unaligned path for the trailing bytes. Since we don't have
a bit scan here anymore, also remove some comments and endian-specific
coding around that.
Reported-by: Anton A. Melnikov <a.melnikov@postgrespro.ru>
Discussion: https://postgr.es/m/f3aa2d45-3b28-41c5-9499-a1bc30e0f8ec@postgrespro.ru
Backpatch-through: 17
There is a race condition between "GRANT role" and "DROP ROLE",
which allows GRANT to install pg_auth_members entries that refer to
dropped roles. (Commit 6566133c5 prevented that for the grantor
field, but not for the granted or grantee roles.) We'll soon fix
that, at least in HEAD, but pg_dumpall needs to cope with the
situation in case of pre-existing inconsistency. As pg_dumpall
stands, it will emit invalid commands like 'GRANT foo TO ""',
which causes pg_upgrade to fail. Fix it to emit warnings and skip
those GRANTs, instead.
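A minimal sketch of the warn-and-skip approach; the helper name and column
indexes are hypothetical, not the actual pg_dumpall code:

    #include <stdbool.h>
    #include <stdio.h>
    #include <libpq-fe.h>

    /*
     * Hypothetical helper: return false (after warning) for rows whose role
     * names came back NULL from the LEFT JOINs, i.e. dangling
     * pg_auth_members entries whose granted or grantee role was dropped.
     * The caller would then skip emitting the GRANT.  The real code would
     * use pg_dumpall's usual warning facility rather than fprintf().
     */
    static bool
    role_grant_is_dumpable(PGresult *res, int row, int i_role, int i_member)
    {
        if (PQgetisnull(res, row, i_role) || PQgetisnull(res, row, i_member))
        {
            fprintf(stderr, "warning: found orphaned pg_auth_members entry, skipping it\n");
            return false;
        }
        return true;
    }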
There was some discussion of removing the problem by changing
dumpRoleMembership's query to use JOIN not LEFT JOIN, but that
would result in silently ignoring such entries. It seems better
to produce a warning.
Pre-v16 branches already coped with dangling grantor OIDs by simply
omitting the GRANTED BY clause. I left that behavior as-is, although
it's somewhat inconsistent with the behavior of later branches.
Reported-by: Virender Singla <virender.cse@gmail.com>
Discussion: https://postgr.es/m/CAM6Zo8woa62ZFHtMKox6a4jb8qQ=w87R2L0K8347iE-juQL2EA@mail.gmail.com
Backpatch-through: 13
Previously the portlock logic, added in 9b4eafcaf41, didn't actually work
properly when the tests were run via meson. 9b4eafcaf41 used the
MESON_BUILD_ROOT environment variable to determine the port lock directory,
but that's never set when running the tests. That meant that each test used
its own portlock dir, unless the PG_TEST_PORT_DIR environment variable was
set.
Fix the problem by setting top_builddir for the environment. That's also used
for the autoconf/make build.
Backpatch back to 16, where meson support was added.
Reported-by: Zharkov Roman <r.zharkov@postgrespro.ru>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Backpatch-through: 16
Dumps from versions older than v16 do not know about NO INDENT in an
XMLSERIALIZE() clause. This commit adjusts AdjustUpgrade.pm so that NO
INDENT is discarded in the contents of the new dump adjusted for
comparison when the old version is v15 or older. This should be enough
to make the cross-version upgrade tests pass.
Per report from buildfarm member crake. Oversight in 984410b92326.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/88b183f1-ebf9-4f51-9144-3704380ccae7@dunslane.net
Backpatch-through: 16
Previously, a WARNING was issued at the time of defining a subscription
with origin=NONE only when the publisher subscribed to the same table from
other publishers, indicating potential data origination from different
origins. However, the publisher can subscribe to the partition ancestors
or partition children of the table from other publishers, which could also
result in mixed-origin data inclusion. So, give a WARNING in those cases
as well.
Reported-by: Sergey Tatarintsev <s.tatarintsev@postgrespro.ru>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16, where it was introduced
Discussion: https://postgr.es/m/5eda6a9c-63cf-404d-8a49-8dcb116a29f3@postgrespro.ru
If the requested recovery timeline is not reachable, the logged
checkpoint and timeline should be the values read from the
backup_label when it is defined. The message generated used the values
from the control file in this case, which is fine when recovering from
the control file without a backup_label, but not if there is a
backup_label.
Issue introduced in ee994272ca50. v15 introduced xlogrecovery.c and
more simplifications in this area (4a92a1c3d1c3, a27048cbcb58), making
this change a bit simpler to reason about, so backpatch only down to
that version.
Author: David Steele <david@pgbackrest.org>
Reviewed-by: Andrey M. Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Benoit Lobréau <benoit.lobreau@dalibo.com>
Discussion: https://postgr.es/m/c3d617d4-1696-4aa7-8a4d-5a7d19cc5618@pgbackrest.org
Backpatch-through: 15
Since commit 29cf61ade3, table_block_relation_estimate_size() considers
fillfactor when estimating number of rows in a relation before the first
ANALYZE. The formula, however, did not consider that tuples may be larger
than the space available as determined by fillfactor, ending up with a
density of 0. This ultimately meant the relation was estimated to contain
a single row.
The executor however places at least one tuple per page, even with very
low fillfactor values, so the density should be at least 1. Fixed by
clamping the density estimate using clamp_row_est().
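A toy recomputation of the arithmetic (made-up numbers, not the actual
table_block_relation_estimate_size() code):

    #include <stdio.h>

    int
    main(void)
    {
        double  page_bytes = 8192.0;
        double  fillfactor = 0.10;      /* fillfactor = 10 */
        double  tuple_width = 2000.0;   /* wider than the fillfactor-limited space */

        /* tuples that "fit" in the fillfactor-limited space: less than one */
        double  density = (page_bytes * fillfactor) / tuple_width;

        /*
         * The executor still places at least one tuple per page, so the
         * per-page density is clamped to a minimum of 1, which is the
         * effect of applying clamp_row_est() in the fix.
         */
        if (density < 1.0)
            density = 1.0;

        printf("estimated tuples per page: %g\n", density);
        return 0;
    }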
Reported by Heikki Linnakangas. Fix by me, with a regression test inspired
by an example provided by Heikki.
Backpatch to 17, where the issue was introduced.
Reported-by: Heikki Linnakangas
Backpatch-through: 17
Discussion: https://postgr.es/m/2bf9d973-7789-4937-a7ca-0af9fb49c71e@iki.fi
Logical replication crashes if the subscriber's partitioned table
has a BRIN index. There are two independently blamable causes,
and this patch fixes both:
1. brininsertcleanup fails if called twice for the same IndexInfo,
because it half-destroys its BrinInsertState but leaves it still
linked from ii_AmCache. brininsert would also fail in that state,
so it's pretty hard to see any advantage to this coding. Fully
remove the BrinInsertState, instead, so that a new brininsert
call would create a new cache.
2. A logical replication subscriber sometimes does ExecOpenIndices
twice on the same ResultRelInfo, followed by doing ExecCloseIndices
twice; the second call reaches the brininsertcleanup bug. Quite
aside from tickling unexpected cases in aminsertcleanup methods,
this seems very wasteful, because the IndexInfos built in the
first ExecOpenIndices call are just lost during the second call,
and have to be rebuilt at possibly-nontrivial cost. We should
establish a coding rule that you don't do that.
The problematic coding is that when the target table is partitioned,
apply_handle_tuple_routing calls ExecFindPartition which does
ExecOpenIndices (and expects that ExecCleanupTupleRouting will
close the indexes again). Using the ResultRelInfo made by
ExecFindPartition, it calls apply_handle_delete_internal or
apply_handle_insert_internal, both of which think they need to do
ExecOpenIndices/ExecCloseIndices for themselves. They do in the main
non-partitioned code paths, but not here. The simplest fix is to pull
their ExecOpenIndices/ExecCloseIndices calls out and put them in the
call sites for the non-partitioned cases. (We could have refactored
apply_handle_update_internal similarly, but I did not do so today
because there's no bug there: the partitioned code path doesn't
call it.)
Also, remove the always-duplicative open/close calls within
apply_handle_tuple_routing itself.
Since brininsertcleanup and indeed the whole aminsertcleanup mechanism
are new in v17, there's no observable bug in older branches. A case
could be made for trying to avoid these duplicative open/close calls
in the older branches, but for now it seems not worth the trouble and
risk of new bugs.
Bug: #18815
Reported-by: Sergey Belyashov <sergey.belyashov@gmail.com>
Discussion: https://postgr.es/m/18815-2a0407cc7f40b327@postgresql.org
Backpatch-through: 17
This addresses various corner cases in BackgroundPsql:
- On Windows stdout and stderr may arrive out of order, leading to errors not
being reported or being attributed to the wrong statement.
To fix, emit the "query-separation banner" on both stdout and stderr and
wait for both.
- Very occasionally the "query-separation banner" would not get removed, because
we waited until the banner arrived, but then replaced the banner plus
newline.
To fix, wait for banner and newline.
- For interactive psql, replacing $banner\n is not sufficient, as interactive
psql outputs \r\n.
- For interactive psql, where commands are echoed to stdout, the \echo
command, rather than its output, would be matched.
This would sometimes lead to output from the prior query, or wait_connect(),
being returned in the next command.
This also affected wait_connect(), sometimes leading to queries being sent
to psql before the connection was actually established.
While debugging these issues I also found that it's hard to know whether a
query separation banner was attributed to the right query. Make that easier by
counting the queries each BackgroundPsql instance has emitted and including
the number in the banner.
Also emit psql stdout/stderr in query() and wait_connect() as Test::More
notes; without that it's rather hard to debug some issues in CI and the
buildfarm.
As this can cause issues not just in to-be-added tests, but also in existing
ones, backpatch the fix to all supported versions.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/wmovm6xcbwh7twdtymxuboaoarbvwj2haasd3sikzlb3dkgz76@n45rzycluzft
Backpatch-through: 13
This is a backport of ba08edb0654. Originally it was only applied to master,
but I (Andres) am planning to fix a few bugs in BackgroundPsql, which would be
somewhat harder with the behavioural differences across branches. It's also
generally good for test infrastructure to behave similarly across branches, to
avoid pain during backpatching.
Discussion: https://postgr.es/m/ilcctzb5ju2gulvnadjmhgatnkxsdpac652byb2u3d3wqziyvx@fbuqcglker46
Michael's original commit message:
This commit extends the constructor routine of BackgroundPsql.pm with a
new "wait" parameter. If set to 0, the routine returns without waiting
for psql to start, ready to consume input.
background_psql() in Cluster.pm gains the same "wait" parameter. The
default behavior is still to wait for psql to start. It is now possible
to not wait, giving TAP scripts the possibility to perform actions between
a BackgroundPsql startup and its wait_connect() call.
Author: Jacob Champion
Discussion: https://postgr.es/m/CAOYmi+=60deN20WDyCoHCiecgivJxr=98s7s7-C8SkXwrCfHXg@mail.gmail.com
This is a backport of 70291a3c66e. Originally it was only applied to master,
but I (Andres) am planning to fix a few bugs in BackgroundPsql that are harder
to fix in the backbranches with the old behavior. It's also generally good for
test infrastructure to behave similarly across branches, to avoid pain during
backpatching. 70291a3c66e changes the behavior in some cases, but after
discussing it, we are OK with that; it seems unlikely that there are
out-of-core tests relying on the prior behavior.
Discussion: https://postgr.es/m/ilcctzb5ju2gulvnadjmhgatnkxsdpac652byb2u3d3wqziyvx@fbuqcglker46
Michael's original commit message:
A newline is not added at the end of an empty query result, causing the
banner of the hardcoded \echo to not be discarded. This would show up in
scripts that expect an empty result as the "QUERY_SEPARATOR" banner in the
output returned back to the caller, which was confusing.
This commit changes BackgroundPsql::query() so that empty results work
correctly, by making the first newline before the banner optional, bringing
more flexibility.
Note that this change affects 037_invalid_database.pl, where three
queries generated an empty result, with the script relying on the data
from the hardcoded banner to exist in the expected output. These
queries are changed to use query_safe(), leading to a simpler script.
The author has also proposed a test in a different patch where empty
results would exist when using BackgroundPsql.
Author: Jacob Champion
Reviewed-by: Andrew Dunstan, Michael Paquier
Discussion: https://postgr.es/m/CAOYmi+=60deN20WDyCoHCiecgivJxr=98s7s7-C8SkXwrCfHXg@mail.gmail.com
Absorb upstream bug fix (their commit
e322673a841d9abd69994ae8cd20e191090b6ef4), which prevents a null
pointer dereference crash if SN_create_env() gets a malloc failure
at just the wrong point.
Thanks to Maksim Korotkov for discovering the null-pointer
bug and submitting the fix to upstream snowball.
Reported-by: Maksim Korotkov <m.korotkov@postgrespro.ru>
Author: Maksim Korotkov <m.korotkov@postgrespro.ru>
Discussion: https://postgr.es/m/1d1a46-67ab1000-21-80c451@83151435
Backpatch-through: 13
When considering a local buffer, the GetBufferDescriptor() call in
BufferGetLSNAtomic() would be retrieving a shared buffer with a bad
buffer ID. Since the code checks whether the buffer is shared before
using the retrieved BufferDesc, this issue did not lead to any
malfunction. Nonetheless this seems like trouble waiting to happen,
so fix it by ensuring that GetBufferDescriptor() is only called when
we know the buffer is shared.
Author: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAHewXNku-o46-9cmUgyv6LkSZ25doDrWq32p=oz9kfD8ovVJMg@mail.gmail.com
Backpatch-through: 13
In try_partitionwise_join, we try to break down the join between two
partitioned relations into joins between matching partitions. To
achieve this, we iterate through each pair of partitions from the two
joining relations and create child join relations for them. To reduce
memory accumulation during each iteration, one step we take is freeing
the SpecialJoinInfos created for the child joins.
A child join's SpecialJoinInfo is a copy of the parent join's
SpecialJoinInfo, with some members being translated copies of their
counterparts in the parent. However, when freeing the bitmapset
members in a child join's SpecialJoinInfo, we failed to check whether
they were translated copies. As a result, we inadvertently freed the
members that were still in use by the parent SpecialJoinInfo, leading
to crashes when those freed members were accessed.
To fix, check if each member of the child join's SpecialJoinInfo is a
translated copy and free it only if that's the case. This requires
passing the parent join's SpecialJoinInfo as a parameter to
free_child_join_sjinfo.
Back-patch to v17 where this bug crept in.
Bug: #18806
Reported-by: 孟令彬 <m_lingbin@126.com>
Diagnosed-by: Tender Wang <tndrwang@gmail.com>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/18806-d70b0c9fdf63dcbf@postgresql.org
Backpatch-through: 17
Coverity complained that fmtIdEnc() wasn't checking for out-of-memory
failures, and it's right.
This fix just makes fmtIdEnc() honor the general convention that OOM
causes a PQExpBuffer to become marked "broken", without any immediate
error. In the pretty-unlikely case that we actually did hit OOM here,
the end result would be to return an empty string to the caller,
probably resulting in invalid SQL syntax in an issued command (if
nothing else went wrong, which is even more unlikely). It's tempting
to throw an "out of memory" error if the buffer becomes broken, but
there's not a lot of point in doing that only here and not in hundreds
of other PQExpBuffer-using places in pg_dump and similar callers.
The whole issue could do with some non-time-crunched redesign, perhaps.
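For reference, a small standalone example of the convention in question,
using libpq's PQExpBuffer API (the query text is arbitrary):

    #include <stdio.h>
    #include "pqexpbuffer.h"

    int
    main(void)
    {
        PQExpBuffer buf = createPQExpBuffer();

        appendPQExpBufferStr(buf, "SELECT 1;");

        /*
         * On allocation failure the buffer is marked "broken" and holds an
         * empty string; callers that want to react to OOM test for that
         * state rather than expecting an immediate error.
         */
        if (PQExpBufferBroken(buf))
            fprintf(stderr, "out of memory\n");
        else
            printf("%s\n", buf->data);

        destroyPQExpBuffer(buf);
        return 0;
    }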
This is a followup to the fixes for CVE-2025-1094, and should be
included if cherry-picking those fixes.
Instead of dropping the trailing byte(s) of an invalid or incomplete
multibyte character, replace only the first byte with a known-invalid
sequence, and process the rest normally. This seems less likely to
confuse incautious callers than the behavior adopted in 5dc1e42b4.
While we're at it, adjust PQescapeStringInternal to produce at most
one bleat about invalid multibyte characters per string. This
matches the behavior of PQescapeInternal, and avoids the risk of
producing tons of repetitive junk if a long string is simply given
in the wrong encoding.
This is a followup to the fixes for CVE-2025-1094, and should be
included if cherry-picking those fixes.
Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/20250215012712.45@rfd.leadboat.com
Backpatch-through: 13
In 5dc1e42b4fa I fixed bugs in various escape functions; unfortunately, as
part of that I introduced a new bug in PQescapeLiteral()/PQescapeIdentifier().
The bug is that I made PQescapeInternal() just use strlen(), rather than
taking the specified input length into account.
That's bad, because it can lead to including input that wasn't intended to be
included (in case len is shorter than the null-terminated length of the
string) and because it can lead to reading invalid memory if the input string
is not null terminated.
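A hedged sketch of what the length argument is supposed to mean (assumes an
already-established PGconn; the input string is made up):

    #include <stdio.h>
    #include <libpq-fe.h>

    void
    demo(PGconn *conn)
    {
        const char *buf = "secretPASSWORD";

        /* Ask for only the first 6 bytes to be escaped. */
        char *quoted = PQescapeLiteral(conn, buf, 6);

        if (quoted == NULL)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return;
        }

        /*
         * Expected output: 'secret'.  With the bug, strlen() was used
         * instead of the given length, so the rest of the string (or, for
         * a buffer that is not null-terminated, adjacent memory) could end
         * up in the escaped result.
         */
        printf("%s\n", quoted);
        PQfreemem(quoted);
    }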
Expand test_escape to cover this kind of bug:
a) for escape functions with length support, append data that should not be
escaped and check that it is not
b) add valgrind requests to detect access of bytes that should not be touched
Author: Tom Lane <tgl@sss.pgh.pa.us>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/Z64jD3u46gObCo1p@pryzbyj2023
Backpatch: 13
Commit 27cc7cd2bc8a accidentally placed the assertion ensuring
that the pointer isn't NULL after it had already been accessed.
Fix by moving the pointer dereferencing to after the assertion.
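The pattern, illustrated with the standard assert() rather than the
backend's Assert() macro (the struct and values are made up):

    #include <assert.h>

    typedef struct Node
    {
        int     value;
    } Node;

    static int
    get_value(Node *node)
    {
        /*
         * Wrong order: dereferencing before asserting defeats the point of
         * the assertion.
         *
         *     int v = node->value;
         *     assert(node != NULL);
         *
         * Right order: assert first, dereference afterwards.
         */
        assert(node != NULL);
        return node->value;
    }

    int
    main(void)
    {
        Node    n = {42};

        return get_value(&n) == 42 ? 0 : 1;
    }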
Backpatch to all supported branches.
Author: Dmitry Koval <d.koval@postgrespro.ru>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/1618848d-cdc7-414b-9c03-08cf4bef4408@postgrespro.ru
Backpatch-through: 13
When an UPDATE trigger referencing a new table and a DELETE trigger
referencing an old table are both present, MakeTransitionCaptureState()
returns an inconsistent result for UPDATE commands in its set of flags
and tuplestores holding the TransitionCaptureState for transition
tables.
As proved by the test added here, this issue causes a crash in v14 and
earlier versions (down to v11, actually; older versions do not support
triggers on partitioned tables) during cross-partition updates on a
partitioned table. v15 and newer versions are safe thanks to
7103ebb7aae8.
This commit fixes the function so that it returns a consistent state
by using portions of the changes made in commit 7103ebb7aae8 for v13 and
v14. v15 and newer versions are slightly tweaked to match the older
versions, mainly for consistency across branches.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20250207.150238.968446820828052276.horikyota.ntt@gmail.com
Backpatch-through: 13
The phase of the test waiting for a restartpoint to complete was not
working as intended, due to an incorrectly written log_contains()
call.
The problem reported by the author could be simply reproduced by
removing the injection_points_wakeup() call: the test succeeds rather
than waiting for the restartpoint completion. In most cases, the
restartpoint completion is fast enough that the test offered the wanted
coverage. On slow machines, it could have become unreliable.
Oversight in 6782709df81f.
Author: Nitin Jadhav
Discussion: https://postgr.es/m/CAMm1aWa_6u+o52r7h7G6pX-oWD0Qraf0ee17Ma50qxGS0B_Rzg@mail.gmail.com
Backpatch-through: 17
As documented, the correct function to free the memory allocated for a
result returned by PQescapeIdentifier() or PQescapeLiteral() is
PQfreemem(). pg_createsubscriber.c relied on pg_free() instead, which
is not incorrect as both do a free() internally, but inconsistent with
the documentation.
While at it, this commit fixes a small memory leak introduced by
4867f8a555ce, as the rest of the pg_createsubscriber code makes the
effort to free such allocations.
Author: Ranier Vilela
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/CAEudQAp=AW5dJXrGLbC_aZg_9nOo=42W7uLDRONFQE-gcgnkgQ@mail.gmail.com
Backpatch-through: 17
We need to tell fmtId() what encoding to assume, but this function
doesn't know that. Fortunately we can fix that without changing the
function's API, because we can just use SQL_ASCII. That's because
database names in connection requests are effectively binary not text:
no encoding-aware processing will happen on them.
This fixes XversionUpgrade failures seen in the buildfarm. The
alternative of having pg_upgrade use setFmtEncoding() is unappetizing,
given that it's connecting to multiple databases that may have
different encodings.
Andres Freund, Noah Misch, Tom Lane
Security: CVE-2025-1094
On machines where char is unsigned this could lead to option parsing looping
endlessly. It's also too narrow a type on other hardware.
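For reference, the portable idiom (plain POSIX getopt() here, not the
PostgreSQL option-parsing code):

    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        /*
         * getopt() returns an int.  Storing it in a plain char breaks where
         * char is unsigned (the -1 end marker becomes 255, so the loop
         * never terminates) and is too narrow a type elsewhere.
         */
        int     c;              /* not: char c */

        while ((c = getopt(argc, argv, "ab:")) != -1)
        {
            switch (c)
            {
                case 'a':
                    printf("got -a\n");
                    break;
                case 'b':
                    printf("got -b %s\n", optarg);
                    break;
                default:
                    return 1;
            }
        }
        return 0;
    }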
Found via Tom Lane's monitoring of the buildfarm.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Security: CVE-2025-1094
Backpatch-through: 13
As highlighted by the prior commit, writing correct escape functions is less
trivial than one might hope.
This test module tries to verify that different escaping functions behave
reasonably. For example, it tests that:
- Invalidly encoded input to an escape function leads to invalidly encoded
output
- Trailing incomplete multi-byte characters are handled sensibly
- Escaped strings are parsed as a single statement by psql's parser (which
is derived from the backend parser)
There are further tests that would be good to add. But even in the current
state it was rather useful for writing the fix in the prior commit.
Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 13
Security: CVE-2025-1094
Previously, invalidly encoded input to various escaping functions could lead to
the escaped string getting incorrectly parsed by psql. To be safe, escaping
functions need to ensure that neither invalid nor incomplete multi-byte
characters can be used to "escape" from being quoted.
Functions which can report errors now return an error in more cases than
before. Functions that cannot report errors now replace invalid input bytes
with a byte sequence that cannot be used to escape the quotes and that is
guaranteed to error out when a query is sent to the server.
The following functions are fixed by this commit:
- PQescapeLiteral()
- PQescapeIdentifier()
- PQescapeString()
- PQescapeStringConn()
- fmtId()
- appendStringLiteral()
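A minimal sketch of the caller-visible effect for one of these,
PQescapeStringConn() (assumes an established connection; the input is
arbitrary untrusted text):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    char *
    escape_or_die(PGconn *conn, const char *raw)
    {
        /* The output buffer must be able to hold 2 * strlen(raw) + 1 bytes. */
        char   *buf = malloc(2 * strlen(raw) + 1);
        int     error = 0;

        if (buf == NULL)
            exit(1);

        PQescapeStringConn(conn, buf, raw, strlen(raw), &error);

        /*
         * With this fix, invalidly encoded input is reported here instead
         * of being passed through in a form that psql could misparse.
         */
        if (error)
        {
            fprintf(stderr, "escaping failed: %s", PQerrorMessage(conn));
            exit(1);
        }
        return buf;
    }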
Reported-by: Stephen Fewer <stephen_fewer@rapid7.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 13
Security: CVE-2025-1094
This commit adds fmtIdEnc() and fmtQualifiedIdEnc(), which allow the
encoding to be specified as an explicit argument. Additionally,
setFmtEncoding() is provided; it defines the encoding to use when no
explicit encoding is given, to avoid breaking all code using fmtId().
All users of fmtId()/fmtQualifiedId() are either converted to the explicit
version, or a call to setFmtEncoding() has been added.
This commit does not yet utilize the now well-defined encoding; that will
happen in a subsequent commit.
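A hedged usage sketch; the exact signatures and header locations are
assumptions based on this description, not a statement of the final API:

    #include "fe_utils/string_utils.h"
    #include "mb/pg_wchar.h"            /* for PG_UTF8 */

    void
    emit_drop(PQExpBuffer buf, const char *tabname)
    {
        /* Preferred: tell the quoting code which encoding the name is in. */
        appendPQExpBuffer(buf, "DROP TABLE %s;", fmtIdEnc(tabname, PG_UTF8));

        /* Or declare it once, so existing fmtId() call sites keep working. */
        setFmtEncoding(PG_UTF8);
        appendPQExpBuffer(buf, "DROP TABLE %s;", fmtId(tabname));
    }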
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 13
Security: CVE-2025-1094
There are cases where we cannot / do not want to error out for invalidly
encoded input. In such cases it can be useful to replace e.g. an incomplete
multi-byte character with bytes that will trigger an error when validated
as part of a larger string.
Unfortunately, until now, for some encodings no such sequence existed. For
those encodings this commit removes one previously accepted input
combination. We consider that to be OK, as the chosen bytes are outside of
the valid ranges for those encodings; we just previously failed to detect
that.
As we cannot add a new field to pg_wchar_table without breaking ABI, this is
implemented "in-line" in the newly added function.
Author: Noah Misch <noah@leadboat.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Backpatch-through: 13
Security: CVE-2025-1094