Postgres 14 commit 5b861baa taught nbtree VACUUM to tolerate buggy
opclasses. VACUUM's inability to locate a to-be-deleted page's downlink
in the parent page is now logged rather than raised as an error, so VACUUM
can press on with vacuuming the index and the table as a whole.
There are now anecdotal reports of this error causing problems that were
much more disruptive than the underlying index corruption ever could be.
Anything that makes VACUUM unable to make forward progress against one
table/index ultimately risks making the system enter xidStopLimit mode.
There is no good reason to take any chances here, so backpatch the
hardening commit.
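As a rough sketch of the hardening pattern (illustrative only, not the code
as committed; the condition and the message text are simplified):

    if (!found_downlink)
    {
        ereport(LOG,
                (errcode(ERRCODE_INDEX_CORRUPTED),
                 errmsg("could not find downlink for deletion target page in index \"%s\"",
                        RelationGetRelationName(rel))));
        return false;   /* skip deleting this page, keep vacuuming the rest */
    }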
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzm9HR6Pow=t-iQa57zT8qmX6_M4h14F-pTtb=xFDW5FBA@mail.gmail.com
Backpatch: 10-13 (all supported versions that lacked the hardening)
Improve documentation regarding the limitations of unique and primary key
constraints on partitioned tables. The existing documentation didn't make
it clear that the constraint columns had to be present in the partition
key as bare columns. The reader could be led to believe that it was OK to
include the constraint columns as parameters to a function call or as part
of an expression. Additionally, the documentation didn't mention that we
disallow unique and primary key constraints if the partition keys contain
*any* function calls or expressions, regardless of whether the constraint
columns appear as bare columns elsewhere in the partition key.
The confusion here was highlighted by a report on the general mailing list
by James Vanns.
Discussion: https://postgr.es/m/CAH7vdhNF0EdYZz3GLpgE3RSJLwWLhEk7A_fiKS9dPBT3Dz_3eA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvoU-u9iTqKjteYRFfi+UNEk7dbSAcyxEQD==vZt9B1KnA@mail.gmail.com
Reviewed-by: Erik Rijkers
Backpatch-through: 11
These have been updated by the revert done in 2f2b18b, but the
pre-revert state was correct. Note that the result was incorrectly
formatted in the first case.
Author: Erik Rijkers
Discussion: https://postgr.es/m/13777e96-24b6-396b-cb16-8ad01b6ac130@xs4all.nl
Backpatch-through: 13
Commits 3f7323cbb et al missed the possibility that the Params
they are looking for could be buried under implicit coercions,
as well as other stuff that processIndirection() could add to
the original targetlist entry. Copy the code in ruleutils.c
that deals with such cases. (I thought about refactoring so
that there's just one copy; but seeing that we only need this
in old back branches, it seems not worth the trouble.)
Per off-list report from Andre Lin. As before, only v10-v13
need the patch.
Discussion: https://postgr.es/m/17596-c5357f61427a81dc@postgresql.org
Primarily, this fixes an incorrect calculation in SlabCheck, which was
looking at the wrong byte for the sentinel check. We've never noticed this
before as a failing sentinel check because the pre-check always fails: all
current core users of slab contexts have a chunk size that is already
MAXALIGNed, so there is never any space for a sentinel byte. An extension
that uses a slab context with a chunk size that is not MAXALIGNed would
likely get errors about overwritten sentinel bytes.
Additionally, this patch changes various calculations that are based on
sizeof(SlabBlock). Currently sizeof(SlabBlock) is a multiple of 8, so it is
the same as MAXALIGN(sizeof(SlabBlock)); however, if we ever had to add
fields to that struct as part of a bug fix, SlabAlloc could end up
returning a non-MAXALIGNed pointer. To be safe, let's ensure we always
MAXALIGN sizeof(SlabBlock) before using it in any calculations.
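The alignment arithmetic can be illustrated with a small standalone sketch
(the struct and macros below are simplified stand-ins, not the real slab.c
definitions; the struct is deliberately arranged so its size is not a
multiple of 8):

    #include <stdint.h>
    #include <stdio.h>

    /* stand-ins for PostgreSQL's MAXIMUM_ALIGNOF / MAXALIGN */
    #define MAXIMUM_ALIGNOF 8
    #define MAXALIGN(LEN) \
        (((uintptr_t) (LEN) + (MAXIMUM_ALIGNOF - 1)) & ~((uintptr_t) (MAXIMUM_ALIGNOF - 1)))

    typedef struct FakeSlabBlock
    {
        int32_t     nfree;
        int32_t     nunused;
        int32_t     firstFreeChunk;     /* 12 bytes total: not a multiple of 8 */
    } FakeSlabBlock;

    int
    main(void)
    {
        /*
         * Chunks laid out at "block + sizeof(header)" would start at a
         * misaligned offset; rounding the header size up with MAXALIGN
         * keeps every chunk pointer maxaligned.
         */
        printf("raw header size:        %zu\n", sizeof(FakeSlabBlock));
        printf("maxaligned header size: %zu\n", (size_t) MAXALIGN(sizeof(FakeSlabBlock)));
        return 0;
    }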
This patch has already been applied to master in d5ee4db0e.
Diagnosed-by: Tomas Vondra, Tom Lane
Author: Tomas Vondra, David Rowley
Discussion: https://postgr.es/m/CAA4eK1%2B1JyW5TiL%3DyV-3Uq1CrfnTyn0Xrk5uArt31Z%3D8rgPhXQ%40mail.gmail.com
Backpatch-through: 10
get_database_list() failed to restore the caller's memory context,
instead leaving the current context set to TopMemoryContext, which is
where CommitTransactionCommand() leaves it. The callers both think
they are using short-lived contexts, for the express purpose of
not having to worry about cleaning up individual allocations.
The net effect therefore is that supposedly short-lived allocations
could accumulate indefinitely in the launcher's TopMemoryContext.
Although this has been broken for a long time, it seems we didn't
have any obvious memory leak here until v15's rearrangement of the
stats logic. I (tgl) am not entirely convinced that there's no
other leak at all, though, and we're surely at risk of adding one
in future back-patched fixes. So back-patch to all supported
branches, even though this may be only a latent bug in pre-v15.
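The shape of the fix is roughly the following (a sketch of the pattern,
not the exact autovacuum.c code):

    List *
    get_database_list(void)
    {
        List           *dblist = NIL;
        MemoryContext   resultcxt = CurrentMemoryContext;

        StartTransactionCommand();

        /* ... scan pg_database and build dblist ... */

        CommitTransactionCommand();

        /*
         * CommitTransactionCommand() leaves CurrentMemoryContext pointing
         * at TopMemoryContext, so switch back to the caller's context
         * before returning.
         */
        MemoryContextSwitchTo(resultcxt);

        return dblist;
    }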
Reid Thompson
Discussion: https://postgr.es/m/972a4e12b68b0f96db514777a150ceef7dcd2e0f.camel@crunchydata.com
If the input word exceeds 1000 bytes, don't pass it to the stemmer;
just return it as-is after case folding. Such an input is surely
not a word in any human language, so whatever the stemmer might
do to it would be pretty dubious in the first place. Adding this
restriction protects us against a known recursion-to-stack-overflow
problem in the Turkish stemmer, and it seems like good insurance
against any other safety or performance issues that may exist in
the Snowball stemmers. (I note, for example, that they contain no
CHECK_FOR_INTERRUPTS calls, so we really don't want them running
for a long time.) The threshold of 1000 bytes is arbitrary.
An alternative definition could have been to treat such words as
stopwords, but that seems like a bigger break from the old behavior.
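A standalone sketch of the guard (the names and the ASCII-only case folding
below are simplifications for illustration, not the dict_snowball code):

    #include <ctype.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_STEMMABLE_LEN 1000      /* arbitrary threshold, as noted above */

    /*
     * Case-fold the word; hand it to the stemmer only if it is short enough
     * to plausibly be a word in some human language, otherwise return the
     * folded copy unchanged.
     */
    static char *
    normalize_word(const char *word, char *(*stem_func) (const char *))
    {
        size_t      len = strlen(word);
        char       *folded = malloc(len + 1);

        if (folded == NULL)
            return NULL;

        for (size_t i = 0; i <= len; i++)
            folded[i] = (char) tolower((unsigned char) word[i]);

        if (len > MAX_STEMMABLE_LEN)
            return folded;              /* too long to stem: return as-is */

        char *stemmed = stem_func(folded);

        free(folded);
        return stemmed;
    }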
Per report from Egor Chindyaskin and Alexander Lakhin.
Thanks to Olly Betts for the recommendation to fix it this way.
Discussion: https://postgr.es/m/1661334672.728714027@f473.i.mail.ru
The default of lazy symbol resolution means that when the postmaster
first reaches the select() call in ServerLoop, it'll need to resolve
the link to that libc entry point. NetBSD's dynamic loader takes
an internal lock while doing that, and if a signal interrupts the
operation then there is a risk of self-deadlock should the signal
handler do anything that requires that lock, as several of the
postmaster signal handlers do. The window for this is pretty narrow,
and timing considerations make it unlikely that a signal would arrive
right then anyway. But it's semi-repeatable on slow single-CPU
machines, and in principle the race could happen with any hardware.
The least messy solution to this is to force binding of dynamic
symbols at postmaster start, using the "-z now" linker option.
While we're at it, also use "-z relro" so as to provide a small
security gain.
It's not entirely clear whether any other platforms share this
issue, but for now we'll assume it's NetBSD-specific. (We might
later try to use "-z now" on more platforms for performance
reasons, but that would not likely be something to back-patch.)
Report and patch by me; the idea to fix it this way is from
Andres Freund.
Discussion: https://postgr.es/m/3384826.1661802235@sss.pgh.pa.us
When a PostgreSQL instance performing archive recovery but not using
standby mode is promoted, and the last WAL segment that it attempted
to read ended in a partial record, the previous code would create
invalid WAL on the new timeline. The WAL from the previous timeline would
be copied to the new timeline up to the end of the last valid record, but
instead of beginning to write WAL immediately afterward, the promoted
server would write an overwrite contrecord at the beginning of the next
segment. The end of the previous segment
would be left as all-zeroes, resulting in failures if anything tried
to read WAL from that file.
The root of the issue is that ReadRecord() decides whether to set
abortedRecPtr and missingContrecPtr based on the value of StandbyMode,
but ReadRecord() switches to a new timeline based on the value of
ArchiveRecoveryRequested. We shouldn't try to write an overwrite
contrecord if we're switching to a new timeline, so change the test in
ReadRecord() to check ArchiveRecoveryRequested instead.
Code fix by Dilip Kumar. Comments by me incorporating suggested
language from Álvaro Herrera. Further review from Kyotaro Horiguchi
and Sami Imseih.
Discussion: http://postgr.es/m/CAFiTN-t7umki=PK8dT1tcPV=mOUe2vNhHML6b3T7W7qqvvajjg@mail.gmail.com
Discussion: http://postgr.es/m/FB0DEA0B-E14E-43A0-811F-C1AE93D00FF3%40amazon.com
Prior to v14, if we have a MULTIEXPR SubPlan (that is, use of the syntax
UPDATE ... SET (c1, ...) = (SELECT ...)) in an UPDATE with an inherited
or partitioned target table, inheritance_planner() will clone the
targetlist and therefore also the MULTIEXPR SubPlan and the Param nodes
referencing it for each child target table. Up to now, we've allowed
all the clones to share the underlying subplan as well as the output
parameter IDs -- that is, the runtime ParamExecData slots. That
technique is borrowed from the far older code that supports initplans,
and it works okay in that case because the cloned SubPlan nodes are
essentially identical. So it doesn't matter which one of the clones
the shared ParamExecData.execPlan field might point to.
However, this fails to hold for MULTIEXPR SubPlans, because they can
have nonempty "args" lists (values to be passed into the subplan), and
those lists could get mutated to different states in the various clones.
In the submitted reproducer, as well as the test case added here, one
clone contains Vars with varno OUTER_VAR where another has INNER_VAR,
because the child tables are respectively on the outer or inner side of
the join. Sharing the execPlan pointer can result in trying to evaluate
an args list that doesn't match the local execution state, with mayhem
ensuing. The result often is to trigger consistency checks in the
executor, but I believe this could end in a crash or incorrect updates.
To fix, assign new Param IDs to each of the cloned SubPlans, so that
they don't share ParamExecData slots at runtime. It still seems fine
for the clones to share the underlying subplan, and extra ParamExecData
slots are cheap enough that this fix shouldn't cost much.
This has been busted since we invented MULTIEXPR SubPlans in 9.5.
Probably the lack of previous reports is because query plans in which
the different clones of a MULTIEXPR mutate to effectively-different
states are pretty rare. There's no issue in v14 and later, because
without inheritance_planner() there's never a reason to clone
MULTIEXPR SubPlans.
Per bug #17596 from Andre Lin. Patch v10-v13 only.
Discussion: https://postgr.es/m/17596-c5357f61427a81dc@postgresql.org
SplitToVariants() in the ispell code, lseg_inside_poly() in geo_ops.c,
and regex_selectivity_sub() in selectivity estimation could recurse
until stack overflow; fix by adding check_stack_depth() calls.
So could next() in the regex compiler, but that case is better fixed by
converting its tail recursion to a loop. (We probably get better code
that way too, since next() can now be inlined into its sole caller.)
There remains a reachable stack overrun in the Turkish stemmer, but
we'll need some advice from the Snowball people about how to fix that.
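For illustration, here are hedged sketches of the two techniques (the names
are invented; only check_stack_depth() is the real backend call, declared
in miscadmin.h):

    typedef struct VariantNode
    {
        struct VariantNode *left;
        struct VariantNode *right;
    } VariantNode;

    /* Genuinely recursive code gets an explicit depth check, so it raises a
     * clean "stack depth limit exceeded" error instead of crashing. */
    static int
    count_variants(const VariantNode *node)
    {
        check_stack_depth();

        if (node == NULL)
            return 0;
        return 1 + count_variants(node->left) + count_variants(node->right);
    }

    /* Tail recursion rewritten as a loop: where a function previously did
     * "return next(v);" to skip uninteresting input, the loop form consumes
     * no extra stack frame per skipped element. */
    static const char *
    skip_blanks(const char *s)
    {
        while (*s == ' ' || *s == '\t')
            s++;                        /* previously: return skip_blanks(s + 1) */
        return s;
    }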
Per report from Egor Chindyaskin and Alexander Lakhin. These mistakes
are old, so back-patch to all supported branches.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/1661334672.728714027@f473.i.mail.ru
On fast machines, it's possible for applications such as pgbench
to issue connection requests so quickly that the postmaster's
listen queue overflows in the kernel, resulting in unexpected
failures (with not-very-helpful error messages). Most modern OSes
allow the queue size to be increased, so document how to do that.
Per report from Kevin McKibbin.
Discussion: https://postgr.es/m/CADc_NKg2d+oZY9mg4DdQdoUcGzN2kOYXBu-3--RW_hEe0tUV=g@mail.gmail.com
sysctl is more portable than Linux's /proc/sys file tree, and
often easier to use too. That's why most of our docs refer to
sysctl when talking about how to adjust kernel parameters.
Bring the few stragglers into line.
Discussion: https://postgr.es/m/361175.1661187463@sss.pgh.pa.us
When creating a partitioned index, DefineIndex tries to identify
any existing indexes on the partitions that match the partitioned
index, so that it can absorb those as child indexes instead of
building new ones. Part of the matching is to compare IndexInfo
structs --- but that wasn't done quite right. We're comparing
the IndexInfo built within DefineIndex itself to one made from
existing catalog contents by BuildIndexInfo. Notably, while
BuildIndexInfo will run index expressions and predicates through
expression preprocessing, that has not happened to DefineIndex's
struct. The result is failure to match and subsequent creation
of duplicate indexes.
The easiest and most bulletproof fix is to build a new IndexInfo
using BuildIndexInfo, thereby guaranteeing that the processing done
is identical.
While here, let's also extract the opfamily and collation data
from the new partitioned index, removing ad-hoc logic that
duplicated knowledge about how those are constructed.
Per report from Christophe Pettus. Back-patch to v11 where
we invented partitioned indexes.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/8864BFAA-81FD-4BF9-8E06-7DEB8D4164ED@thebuild.com
The current publisher code checks if UPDATE or DELETE can be executed with
the replica identity of the table even if it's a partitioned table. We can
skip checking the replica identity for partitioned tables because the
operations are actually performed on the leaf partitions (not the
partitioned table).
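The change amounts to an early exit of roughly this shape (a sketch, not
necessarily the exact committed code):

    /* In the check that validates a table's replica identity before
     * publishing UPDATE/DELETE: */
    if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
        return;     /* the operations are performed on the leaf partitions */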
Reported-by: Brad Nicholson
Author: Hou Zhijie
Reviewed-by: Peter Smith, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAMMnM%3D8i5DohH%3DYKzV0_wYuYSYvuOJoL9F5nzXTc%2ByzsG1f6rg%40mail.gmail.com
There's a convention that externally-visible libpq functions should
check for a NULL PGconn pointer, and fail gracefully instead of
crashing. PQflush() and PQisnonblocking() didn't get that memo
though. Also add a similar check to PQdefaultSSLKeyPassHook_OpenSSL;
while it's not clear that ordinary usage could reach that with a
null conn pointer, it's cheap enough to check, so let's be consistent.
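The convention is simply an up-front NULL test, along these lines (close to
the committed shape, shown here as a sketch):

    int
    PQflush(PGconn *conn)
    {
        if (!conn)
            return -1;          /* fail gracefully rather than crash */
        return pqFlush(conn);
    }

    int
    PQisnonblocking(const PGconn *conn)
    {
        if (!conn)
            return 0;
        return pqIsnonblocking(conn);
    }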
Daniele Varrazzo and Tom Lane
Discussion: https://postgr.es/m/CA+mi_8Zm_mVVyW1iNFgyMd9Oh0Nv8-F+7Y3-BqwMgTMHuo_h2Q@mail.gmail.com
When enlarging the work buffers of a VarStringSortSupport object,
varstrfastcmp_locale was careful to keep them in the ssup_cxt
memory context; but varstr_abbrev_convert just used palloc().
The latter creates a hazard that the buffers could be freed out
from under the VarStringSortSupport object, resulting in stomping
on whatever gets allocated in that memory later.
In practice, because we only use this code for ICU collations
(cf. 3df9c374e), the problem is confined to use of ICU collations.
I believe it may have been unreachable before the introduction
of incremental sort, too, as traditional sorting usually just
uses one context for the duration of the sort.
We could fix this by making the broken stanzas in varstr_abbrev_convert
match the non-broken ones in varstrfastcmp_locale. However, it seems
like a better idea to dodge the issue altogether by replacing the
pfree-and-allocate-anew coding with repalloc, which automatically
preserves the chunk's memory context. This fix does add a few cycles
because repalloc will copy the chunk's content, which the existing
coding assumes is useless. However, we don't expect that these buffer
enlargement operations are performance-critical. Besides that, it's
far from obvious that copying the buffer contents isn't required, since
these stanzas make no effort to mark the buffers invalid by resetting
last_returned, cache_blob, etc. That seems to be safe upon examination,
but it's fragile and could easily get broken in future, which wouldn't
get revealed in testing with short-to-moderate-size strings.
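In sketch form, the change replaces this pattern (field names abbreviated,
growth policy elided; not the exact varlena.c code):

    if (len >= sss->buflen1)
    {
        pfree(sss->buf1);
        sss->buflen1 = len + 1;
        sss->buf1 = palloc(sss->buflen1);   /* lands in CurrentMemoryContext */
    }

with this one:

    if (len >= sss->buflen1)
    {
        sss->buflen1 = len + 1;
        sss->buf1 = repalloc(sss->buf1, sss->buflen1);  /* keeps its original context */
    }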
Per bug #17584 from James Inform. Whether or not the issue is
reachable in the older branches, this code has been broken on its
own terms from its introduction, so patch all the way back.
Discussion: https://postgr.es/m/17584-95c79b4a7d771f44@postgresql.org
It's possible to reach this case when work_mem is very small and tupsize
is (relatively) very large. In that case ExecChooseHashTableSize would
get an assertion failure, or with asserts off it'd compute nbuckets = 0,
which'd likely cause misbehavior later (I've not checked). To fix,
clamp the number of buckets to be at least 1.
This is due to faulty conversion of old my_log2() coding in 28d936031.
Back-patch to v13, as that was.
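In sketch form (the surrounding arithmetic is only indicative; Max() is the
usual backend macro):

    /* With a tiny work_mem and a large tuple size, this can come out as 0. */
    nbuckets = (int) (hash_table_bytes / bucket_bytes);

    /* The fix: never size the hash table at fewer than one bucket. */
    nbuckets = Max(nbuckets, 1);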
Zhang Mingli
Discussion: https://postgr.es/m/beb64ca0-91e2-44ac-bf4a-7ea36275ec02@Spark
Most parts of the parser can expect that the stack overflow check
in transformExprRecurse() will trigger before things get desperate.
However, transformFromClauseItem() can recurse directly to self
without having analyzed any expressions, so it's possible to drive
it to a stack-overrun crash. Add a check to prevent that.
Per bug #17583 from Egor Chindyaskin. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17583-33be55b9f981f75c@postgresql.org
As of 897795240cfaaed724af2f53ed2c50c9862f951f, check constraints can
be declared invalid. But that patch didn't update _outConstraint() to
also show the relevant struct fields (which were only applicable to
foreign keys before that). This currently only affects debugging
output, so no impact in practice.
96ef3b8ff1cf1950e897fd2f766d4bd9ef0d5d56 accidentally copied a comment
from the float8_pass_by_value code to the data_checksums code, where it
does not apply. Remove it.
87d3b35a1ca31a9d947a8f919a6006679216dff0 changed pg_upgrade to check
the checksum version rather than just the Boolean presence of checksums,
but didn't change the corresponding field in its ControlData struct from
bool. So this would not work correctly if there were ever a checksum
version larger than 1.
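A standalone sketch of the point (the struct and field names below are
simplified stand-ins, not pg_upgrade's actual definitions): storing the
version rather than a yes/no flag keeps the comparison meaningful should a
checksum version other than 0 or 1 ever exist.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct
    {
        uint32_t    data_checksum_version;  /* 0 = checksums disabled */
    } ClusterControl;

    static bool
    checksums_compatible(const ClusterControl *oldcluster,
                         const ClusterControl *newcluster)
    {
        /* compare versions, not merely "enabled vs. disabled" */
        return oldcluster->data_checksum_version ==
               newcluster->data_checksum_version;
    }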
The set of fields printed by _outConstraint() in the CONSTR_IDENTITY
case didn't match the set of fields actually used in that case. (The
code was probably copied carelessly from the CONSTR_DEFAULT case.)
Fix that by using the right set of fields. Since there is no read
support for this node type, this is really just for debugging output
right now, so it doesn't affect anything important.
Previously, we relied on HEAP2_NEW_CID records and XACT_INVALIDATION
records to know whether a transaction had modified the catalog, and that
information is not serialized to the snapshot. Therefore, after a restart,
if logical decoding decodes only the commit record of a transaction that
actually modified a catalog, we will miss adding its XID to the snapshot
and will end up looking at catalogs with the wrong snapshot.
To fix this problem, change the snapshot builder so that it remembers the
last-running-xacts list of the decoded RUNNING_XACTS record after restoring
the previously serialized snapshot. Then, we mark a transaction as
containing catalog changes if it is in that list of initial running
transactions and its commit record has XACT_XINFO_HAS_INVALS. To
avoid ABI breakage, we store the array of the initial running transactions
in the static variables InitialRunningXacts and NInitialRunningXacts,
instead of storing those in SnapBuild or ReorderBuffer.
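In sketch form, the decision made while decoding a commit record looks
roughly like this (the static-variable names are as above; the rest is
simplified, and it assumes the array is kept sorted so that bsearch()
applies):

    /*
     * If this xid was running when the serialized snapshot was restored and
     * its commit record carries invalidation messages, assume it may have
     * modified the catalog.
     */
    if (NInitialRunningXacts > 0 &&
        bsearch(&xid, InitialRunningXacts, NInitialRunningXacts,
                sizeof(TransactionId), xidComparator) != NULL &&
        (parsed->xinfo & XACT_XINFO_HAS_INVALS) != 0)
        ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);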
This approach can produce false positives: we could end up adding a
transaction that didn't change the catalog to the snapshot, since we cannot
tell whether a transaction has catalog changes just by checking its COMMIT
record. The commit record does not say which (sub)transaction has catalog
changes, and XACT_XINFO_HAS_INVALS doesn't necessarily indicate that the
transaction changed the catalog. But that won't be a problem, since the
snapshot built during decoding is used only to read system catalogs.
On the master branch, we took a more future-proof approach of writing
catalog-modifying transactions to the serialized snapshot, which avoids the
above false positive. But we cannot backpatch that, because it requires a
change to SnapBuild.
Reported-by: Mike Oh
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Shi yu, Takamichi Osumi, Kyotaro Horiguchi, Bertrand Drouvot, Ahsan Hadi
Backpatch-through: 10
Discussion: https://postgr.es/m/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com