When creating a partitioned index, DefineIndex tries to identify
any existing indexes on the partitions that match the partitioned
index, so that it can absorb those as child indexes instead of
building new ones. Part of the matching is to compare IndexInfo
structs --- but that wasn't done quite right. We're comparing
the IndexInfo built within DefineIndex itself to one made from
existing catalog contents by BuildIndexInfo. Notably, while
BuildIndexInfo will run index expressions and predicates through
expression preprocessing, that has not happened to DefineIndex's
struct. The result is failure to match and subsequent creation
of duplicate indexes.
The easiest and most bulletproof fix is to build a new IndexInfo
using BuildIndexInfo, thereby guaranteeing that the processing done
is identical.
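A minimal sketch of the idea, assuming DefineIndex has the new
partitioned index relation open (variable names here are illustrative,
not the committed code):
    /*
     * Build the comparison IndexInfo from the catalog entry of the new
     * partitioned index, so its expressions and predicate go through the
     * same preprocessing as those of the existing child indexes, which
     * are likewise obtained via BuildIndexInfo.
     */
    IndexInfo  *parentInfo = BuildIndexInfo(parentIndexRel);
    /* ... and for each candidate index on a partition ... */
    IndexInfo  *childInfo = BuildIndexInfo(childIndexRel);
    /* both sides are now preprocessed identically and can be matched */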
While here, let's also extract the opfamily and collation data
from the new partitioned index, removing ad-hoc logic that
duplicated knowledge about how those are constructed.
Per report from Christophe Pettus. Back-patch to v11 where
we invented partitioned indexes.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/8864BFAA-81FD-4BF9-8E06-7DEB8D4164ED@thebuild.com
The current publisher code checks whether UPDATE or DELETE can be
executed with the table's replica identity even when the table is a
partitioned table. We can skip that check for partitioned tables, because
the operations are actually performed on the leaf partitions (not on the
partitioned table itself).
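A hedged sketch of the skip, assuming a CheckCmdReplicaIdentity-style
check on the publisher side (surrounding code abridged):
    /* Replica identity is irrelevant for the partitioned table itself;
     * the UPDATE/DELETE is replicated from the leaf partitions. */
    if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
        return;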
Reported-by: Brad Nicholson
Author: Hou Zhijie
Reviewed-by: Peter Smith, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAMMnM%3D8i5DohH%3DYKzV0_wYuYSYvuOJoL9F5nzXTc%2ByzsG1f6rg%40mail.gmail.com
There's a convention that externally-visible libpq functions should
check for a NULL PGconn pointer, and fail gracefully instead of
crashing. PQflush() and PQisnonblocking() didn't get that memo
though. Also add a similar check to PQdefaultSSLKeyPassHook_OpenSSL;
while it's not clear that ordinary usage could reach that with a
null conn pointer, it's cheap enough to check, so let's be consistent.
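A sketch of the guard pattern; the return values are chosen to mean
"failure"/"false" for a NULL connection, and the internal calls shown
are assumptions about the surrounding libpq code:
    int
    PQflush(PGconn *conn)
    {
        if (!conn)
            return -1;
        return pqFlush(conn);
    }
    int
    PQisnonblocking(const PGconn *conn)
    {
        if (!conn)
            return false;
        return pqIsnonblocking(conn);
    }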
Daniele Varrazzo and Tom Lane
Discussion: https://postgr.es/m/CA+mi_8Zm_mVVyW1iNFgyMd9Oh0Nv8-F+7Y3-BqwMgTMHuo_h2Q@mail.gmail.com
When enlarging the work buffers of a VarStringSortSupport object,
varstrfastcmp_locale was careful to keep them in the ssup_cxt
memory context; but varstr_abbrev_convert just used palloc().
The latter creates a hazard that the buffers could be freed out
from under the VarStringSortSupport object, resulting in stomping
on whatever gets allocated in that memory later.
In practice, because we only use this code for ICU collations
(cf. 3df9c374e), the problem is confined to that case.
I believe it may have been unreachable before the introduction
of incremental sort, too, as traditional sorting usually just
uses one context for the duration of the sort.
We could fix this by making the broken stanzas in varstr_abbrev_convert
match the non-broken ones in varstrfastcmp_locale. However, it seems
like a better idea to dodge the issue altogether by replacing the
pfree-and-allocate-anew coding with repalloc, which automatically
preserves the chunk's memory context. This fix does add a few cycles
because repalloc will copy the chunk's content, which the existing
coding assumes is useless. However, we don't expect that these buffer
enlargement operations are performance-critical. Besides that, it's
far from obvious that copying the buffer contents isn't required, since
these stanzas make no effort to mark the buffers invalid by resetting
last_returned, cache_blob, etc. That seems to be safe upon examination,
but it's fragile and could easily get broken in future, which wouldn't
get revealed in testing with short-to-moderate-size strings.
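In code terms the change amounts to something like this (buffer and
length names are illustrative):
    /* before: the new buffer silently lands in the current context */
    pfree(sss->buf1);
    sss->buf1 = palloc(len + 1);
    /* after: repalloc keeps the chunk in the context it was created in */
    sss->buf1 = repalloc(sss->buf1, len + 1);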
Per bug #17584 from James Inform. Whether or not the issue is
reachable in the older branches, this code has been broken on its
own terms from its introduction, so patch all the way back.
Discussion: https://postgr.es/m/17584-95c79b4a7d771f44@postgresql.org
It's possible to reach this case when work_mem is very small and tupsize
is (relatively) very large. In that case ExecChooseHashTableSize would
get an assertion failure, or with asserts off it'd compute nbuckets = 0,
which'd likely cause misbehavior later (I've not checked). To fix,
clamp the number of buckets to be at least 1.
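A sketch of the clamp, in ExecChooseHashTableSize-style code:
    /* guard against work_mem so small that the bucket count rounds to 0 */
    nbuckets = Max(nbuckets, 1);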
This is due to faulty conversion of old my_log2() coding in 28d936031.
Back-patch to v13, as that was.
Zhang Mingli
Discussion: https://postgr.es/m/beb64ca0-91e2-44ac-bf4a-7ea36275ec02@Spark
Most parts of the parser can expect that the stack overflow check
in transformExprRecurse() will trigger before things get desperate.
However, transformFromClauseItem() can recurse directly to self
without having analyzed any expressions, so it's possible to drive
it to a stack-overrun crash. Add a check to prevent that.
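The added guard is the usual one-liner at the top of the recursive
function, sketched here:
    /* transformFromClauseItem() can recurse to itself without passing
     * through transformExprRecurse(), so give it its own check */
    check_stack_depth();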
Per bug #17583 from Egor Chindyaskin. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17583-33be55b9f981f75c@postgresql.org
As of 897795240cfaaed724af2f53ed2c50c9862f951f, check constraints can
be declared invalid. But that patch didn't update _outConstraint() to
also show the relevant struct fields (which were only applicable to
foreign keys before that). This currently only affects debugging
output, so no impact in practice.
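A plausible sketch of the addition, assuming the usual outfuncs.c macros
and the Constraint field names:
    /* in _outConstraint(), CONSTR_CHECK case */
    WRITE_BOOL_FIELD(skip_validation);
    WRITE_BOOL_FIELD(initially_valid);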
96ef3b8ff1cf1950e897fd2f766d4bd9ef0d5d56 accidentally copied an
inapplicable comment from the float8_pass_by_value code to the
data_checksums code. Remove it.
87d3b35a1ca31a9d947a8f919a6006679216dff0 changed pg_upgrade to check the
checksum version rather than just the Boolean presence of checksums, but
didn't change the corresponding field in its ControlData struct from
bool. So this would not work correctly if there were ever a checksum
version larger than 1.
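The corresponding struct change would look roughly like this (field name
assumed from the description above):
    /* in pg_upgrade's ControlData struct */
    uint32      data_checksum_version;  /* was bool, losing versions > 1 */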
The set of fields printed by _outConstraint() in the CONSTR_IDENTITY
case didn't match the set of fields actually used in that case. (The
code was probably carelessly copied from the CONSTR_DEFAULT case.)
Fix that by using the right set of fields. Since there is no read
support for this node type, this is really just for debugging output
right now, so it doesn't affect anything important.
Previously, we relied on HEAP2_NEW_CID records and XACT_INVALIDATION
records to know whether a transaction had modified the catalog, and that
information is not serialized to the snapshot. Therefore, after a
restart, if logical decoding decodes only the commit record of a
transaction that actually modified a catalog, we will miss adding its XID
to the snapshot. Thus, we will end up looking at catalogs with the wrong
snapshot.
To fix this problem, this changes the snapshot builder so that it
remembers the last-running-xacts list of the decoded RUNNING_XACTS record
after restoring the previously serialized snapshot. Then, we mark the
transaction as containing catalog changes if it's in the list of initial
running transactions and its commit record has XACT_XINFO_HAS_INVALS. To
avoid ABI breakage, we store the array of the initial running transactions
in the static variables InitialRunningXacts and NInitialRunningXacts,
instead of storing those in SnapBuild or ReorderBuffer.
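A hedged sketch of the commit-time check, using the variables named
above; the helper calls are assumptions about the surrounding decoding
code:
    /* while decoding a commit record after restoring a snapshot */
    if (NInitialRunningXacts > 0 &&
        (parsed->xinfo & XACT_XINFO_HAS_INVALS) != 0 &&
        bsearch(&xid, InitialRunningXacts, NInitialRunningXacts,
                sizeof(TransactionId), xidComparator) != NULL)
        ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);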
This approach can produce a false positive; we could end up adding to
the snapshot a transaction that didn't change the catalog, since we
cannot tell whether a transaction has catalog changes just by checking
its COMMIT record. That record doesn't say which (sub)transaction has
catalog changes, and XACT_XINFO_HAS_INVALS doesn't necessarily indicate
that the transaction has catalog changes. But that won't be a problem,
since the snapshot built during decoding is used only to read system
catalogs.
On the master branch, we took a more future-proof approach by writing
catalog-modifying transactions to the serialized snapshot, which avoids
the above false positive. But we cannot back-patch it because it requires
a change to SnapBuild.
Reported-by: Mike Oh
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Shi yu, Takamichi Osumi, Kyotaro Horiguchi, Bertrand Drouvot, Ahsan Hadi
Backpatch-through: 10
Discussion: https://postgr.es/m/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com
fmgr_sql must make expanded-datum arguments read-only, because
it's possible that the function body will pass the argument to
more than one callee function. If one of those functions takes
the datum's R/W property as license to scribble on it, then later
callees will see an unexpected value, leading to wrong answers.
From a performance standpoint, it'd be nice to skip this in the
common case that the argument value is passed to only one callee.
However, detecting that seems fairly hard, and certainly not
something that I care to attempt in a back-patched bug fix.
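A hedged sketch of the fix in fmgr_sql-style argument setup; the loop and
the typlen lookup are illustrative:
    /* flatten R/W expanded datums so no callee can scribble on them */
    for (int i = 0; i < fcinfo->nargs; i++)
        fcinfo->args[i].value =
            MakeExpandedObjectReadOnly(fcinfo->args[i].value,
                                       fcinfo->args[i].isnull,
                                       get_typlen(argtypes[i]));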
Per report from Adam Mackler. This has been broken since we
invented expanded datums, so back-patch to all supported branches.
Discussion: https://postgr.es/m/WScDU5qfoZ7PB2gXwNqwGGgDPmWzz08VdydcPFLhOwUKZcdWbblbo-0Lku-qhuEiZoXJ82jpiQU4hOjOcrevYEDeoAvz6nR0IU4IHhXnaCA=@mackler.email
Discussion: https://postgr.es/m/187436.1660143060@sss.pgh.pa.us
Per buildfarm, the output order of \dx+ isn't consistent across
locales. Apply NO_LOCALE to force C locale. There might be a
more localized way, but I'm not seeing it offhand, and anyway
there is nothing in this test module that particularly cares
about locales.
Security: CVE-2022-2625
Previously, if an extension script did CREATE OR REPLACE and there was
an existing object not belonging to the extension, it would overwrite
the object and adopt it into the extension. This is problematic, first
because the overwrite is probably unintentional, and second because we
didn't change the object's ownership. Thus a hostile user could create
an object in advance of an expected CREATE EXTENSION command, and would
then have ownership rights on an extension object, which could be
modified for trojan-horse-type attacks.
Hence, forbid CREATE OR REPLACE of an existing object unless it already
belongs to the extension. (Note that we've always forbidden replacing
an object that belongs to some other extension; only the behavior for
previously-free-standing objects changes here.)
For the same reason, also fail CREATE IF NOT EXISTS when there is
an existing object that doesn't belong to the extension.
Our thanks to Sven Klemm for reporting this problem.
Security: CVE-2022-2625
Commit 59be1c942a47 already tried to make
src/test/recovery/t/033_replay_tsp_drops more reliable, but it wasn't
enough. Try to improve on that by making its use of a replication slot
more like that of other tests. Also, don't drop the slot.
Make a few other stylistic changes while at it. It's still quite slow,
which is another thing that we need to fix in this script.
Backpatch to all supported branches.
Discussion: https://postgr.es/m/349302.1659191875@sss.pgh.pa.us
On closer inspection, mcv.c isn't as broken for ScalarArrayOpExpr
as I thought. The Var-on-right issue is real enough, but actually
it does cope fine with a NULL array constant --- I was misled by
an XXX comment suggesting it didn't. Undo that part of the code
change, and replace the XXX comment with something less misleading.
statext_is_compatible_clause_internal() checked that the arguments
of a ScalarArrayOpExpr are one Var and one Const, but it would allow
cases where the Const was on the left. Subsequent uses of the clause
are not expecting that and would suffer assertion failures or core
dumps. mcv.c also had not bothered to cope with the case of a NULL
array constant, which seems really unacceptably sloppy of somebody.
(Although our tools failed us there too, since AFAIK neither Coverity
nor any compiler warned of the obvious use-of-uninitialized-variable
condition.) It seems best to handle that by having
statext_is_compatible_clause_internal() reject it.
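A hedged sketch of the tightened checks in
statext_is_compatible_clause_internal() (variable names illustrative):
    /* require Var on the left and Const on the right */
    if (!IsA(second_arg, Const))
        return false;
    /* reject a NULL array constant instead of making mcv.c cope with it */
    if (((Const *) second_arg)->constisnull)
        return false;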
Noted while fixing bug #17570. Back-patch to v13 where the
extended stats code grew some awareness of ScalarArrayOpExpr.
This makes it more convenient to carry the blame.ignoreRevsFile setting
in git config; current git versions complain if that is set but the file
is not present.
I constructed the file for each branch by scraping the file in branch
master for commits that appear in that branch. Because a few additional
pgindent commits have been added to the list in master since the list
was first created, this also propagates those to branches 14 and 15
where the file already existed. Also, some entries appear to have been
made using author-date rather than committer-date in the format string,
so some timestamps are changed. Also remove bogus whitespace in the
suggested `git log` format string.
Backpatch to all supported branches.
Discussion: https://postgr.es/m/20220711163138.o72evdeus5f5yy5z@alvherre.pgsql
The five commits ending at cc2c7d65fc27e877c9f407587b0b92d46cd6dd16
closed this race condition for v15+. For v14 through v10, add a HINT to
discourage studying the cosmetic problem.
Reviewed by Kyotaro Horiguchi and David Steele.
Discussion: https://postgr.es/m/20220731061747.GA3692882@rfd.leadboat.com
Having additional triggers in a test table made the ORDER BY clauses in
old queries underspecified. Add another column there for stability.
Per sporadic buildfarm pink.
Using ATSimpleRecursion() in ATPrepCmd() to do so, as bbb927b4db9b did,
is not correct, because ATPrepCmd() can't distinguish between triggers
that may be cloned and those that may not, so it would wrongly try to
recurse for the latter category of triggers.
So this commit restores the code in EnableDisableTrigger() that
86f575948c77 had added to do the recursion, which would do it only for
triggers that may be cloned, that is, row-level triggers. This also
changes tablecmds.c such that ATExecCmd() is able to pass the value of
ONLY flag down to EnableDisableTrigger() using its new 'recurse'
parameter.
This also fixes what seems like an oversight of 86f575948c77, namely
that the recursion to partition triggers would only occur if
EnableDisableTrigger() had actually changed the trigger. It is more apt
to recurse to inspect
partition triggers even if the parent's trigger didn't need to be
changed: only then can we be certain that all descendants share the same
state afterwards.
Backpatch all the way back to 11, like bbb927b4db9b. Care is taken not
to break ABI compatibility (and to ensure that no catversion bump is
needed).
Co-authored-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
Discussion: https://postgr.es/m/CA+HiwqG-cZT3XzGAnEgZQLoQbyfJApVwOTQaCaas1mhpf+4V5A@mail.gmail.com
Ordinarily the functions called in this loop ought to have plenty
of CFIs themselves; but we've now seen a case where no such CFI is
reached, making the loop uninterruptible. Even though that's from
a recently-introduced bug, it seems prudent to install a CFI at
the loop level in all branches.
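The addition is the standard macro at the top of the loop body; a
sketch:
    for (;;)                        /* the potentially long-running loop */
    {
        CHECK_FOR_INTERRUPTS();
        /* existing per-iteration work here */
    }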
Per discussion of bug #17558 from Andrew Kesper (an actual fix for
that bug will follow).
Discussion: https://postgr.es/m/17558-3f6599ffcf52fd4a@postgresql.org
Remove the test case added by commit fac1b470a, which never actually
worked to expose the problem it claimed to test. Replace it with
a case that does expose the problem, and also covers the SRF-not-
at-the-top deficiency repaired in 1aa8dad41.
Richard Guo, with some editorialization by me
Discussion: https://postgr.es/m/17564-c7472c2f90ef2da3@postgresql.org
Commit fac1b470a thought we could check for set-returning functions
by testing only the top-level node in an expression tree. This is
wrong in itself, and to make matters worse it encouraged others
to make the same mistake, by exporting tlist.c's special-purpose
IS_SRF_CALL() as a widely-visible macro. I can't find any evidence
that anyone's taken the bait, but it was only a matter of time.
Use expression_returns_set() instead, and stuff the IS_SRF_CALL()
genie back in its bottle, this time with a warning label. I also
added a couple of cross-reference comments.
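A sketch of the difference (the expression variable is illustrative):
    /* correct: examines the whole expression tree */
    bool        hasSRF = expression_returns_set((Node *) expr);
    /* wrong (the removed approach): looks only at the top-level node */
    /* hasSRF = IS_SRF_CALL(expr); */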
After a fair amount of fooling around, I've despaired of making
a robust test case that exposes the bug reliably, so no test case
here. (Note that the test case added by fac1b470a is itself
broken, in that it doesn't notice if you remove the code change.
The repro given by the bug submitter currently doesn't fail either
in v15 or HEAD, though I suspect that may indicate an unrelated bug.)
Per bug #17564 from Martijn van Oosterhout. Back-patch to v13,
as the faulty patch was.
Discussion: https://postgr.es/m/17564-c7472c2f90ef2da3@postgresql.org
The sto_using_cursor and sto_using_select tests were coded to exercise
every permutation of their test steps, but AFAICS there is no value in
exercising more than one. This matters because each permutation costs
about six seconds, thanks to the "pg_sleep(6)". Perhaps we could
reduce that, but the useless permutations seem worth getting rid of
in any case. (Note that sto_using_hash_index got it right already.)
While here, clean up some other sloppiness such as an unused table.
This doesn't make too much difference in interactive testing, since the
wasted time is typically masked by parallelization with other tests.
However, the buildfarm runs this as a serial step, which means we can
expect to shave ~40 seconds from every buildfarm run. That makes it
worth back-patching.
Discussion: https://postgr.es/m/2515192.1659454702@sss.pgh.pa.us
We've heard a couple of reports of people having trouble with
multi-gigabyte-sized query-texts files. It occurred to me that on
32-bit platforms, there could be an issue with integer overflow
of calculations associated with the total query text size.
Address that with several changes:
1. Limit pg_stat_statements.max to INT_MAX / 2 not INT_MAX.
The hashtable code will bound it to that anyway unless "long"
is 64 bits. We still need overflow guards on its use, but
this helps.
2. Add a check to prevent extending the query-texts file to
more than MaxAllocHugeSize. If it got that big, qtext_load_file
would certainly fail, so there's not much point in allowing it.
Without this, we'd need to consider whether extent, query_offset, and
related variables ought to be off_t rather than size_t.
3. Adjust the comparisons in need_gc_qtexts() to be done in 64-bit
arithmetic on all platforms (a sketch follows this list). It appears
possible that under duress
those multiplications could overflow 32 bits, yielding a false
conclusion that we need to garbage-collect the texts file, which
could lead to repeatedly garbage-collecting after every hash table
insertion.
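A hedged sketch of point 3, forcing the product into 64-bit arithmetic
(variable names illustrative):
    /* need_gc_qtexts()-style test, safe on 32-bit platforms */
    if ((uint64) extent < (uint64) pgss->mean_query_len * pgss_max * 2)
        return false;       /* texts file is not bloated enough yet */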
Per report from Bruno da Silva. I'm not convinced that these
issues fully explain his problem; there may be some other bug that's
contributing to the query-texts file becoming so large in the first
place. But it did get that big, so #2 is a reasonable defense,
and #3 could explain the reported performance difficulties.
(See also commit 8bbe4cbd9, which addressed some related bugs.
The second Discussion: link is the thread that led up to that.)
This issue is old, and is primarily a problem for old platforms,
so back-patch.
Discussion: https://postgr.es/m/CAB+Nuk93fL1Q9eLOCotvLP07g7RAv4vbdrkm0cVQohDVMpAb9A@mail.gmail.com
Discussion: https://postgr.es/m/5601D354.5000703@BlueTreble.com
I thought commit fd96d14d9 had plugged all the holes of this sort,
but no, function RTEs could produce oversize tuples too, either
via long coldeflists or just from multiple functions in one RTE.
(I'm pretty sure the other variants of base RTEs aren't a problem,
because they ultimately refer to either a table or a sub-SELECT,
whose widths are enforced elsewhere. But we explicitly allow join
RTEs to be overwidth, as long as you don't try to form their
tuple result.)
Per further discussion of bug #17561. As before, patch all branches.
Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
errno was not reported correctly after attempting to clone a file,
leading to incorrect error reports. While scanning through the code, I
did not notice any similar mistakes.
Error introduced in 3a769d8.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220731134135.GY15006@telsasoft.com
Backpatch-through: 12
The new test is from commit 9e4f914b5e.
With this setting, messages include SQL error numbers, so the pattern
being matched needs to allow for that.
Backpatch to all live branches, like the original.
A RowExpr with more than MaxTupleAttributeNumber columns would fail at
execution anyway, since we cannot form a tuple datum with more than that
many columns. While heap_form_tuple() has a check for too many columns,
it emerges that there are some intermediate bits of code that don't
check and can be driven to failure with sufficiently many columns.
Checking this at parse time seems like the most appropriate place to
install a defense, since we already check SELECT list length there.
While at it, make the SELECT-list-length error use the same errcode
(TOO_MANY_COLUMNS) as heap_form_tuple does, rather than the generic
PROGRAM_LIMIT_EXCEEDED.
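A hedged sketch of the parse-time defense, in transformRowExpr-style
code (the exact message wording is an assumption):
    if (list_length(newr->args) > MaxTupleAttributeNumber)
        ereport(ERROR,
                (errcode(ERRCODE_TOO_MANY_COLUMNS),
                 errmsg("ROW expressions can have at most %d entries",
                        MaxTupleAttributeNumber)));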
Per bug #17561 from Egor Chindyaskin. The given test case crashes
in all supported branches (and probably a lot further back),
so patch all.
Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
On FreeBSD, the new test fails due to a WAL file being removed before
the standby has had the chance to copy it. Fix by adding a replication
slot to prevent the removal until after the standby has connected.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reported-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAEze2Wj5nau_qpjbwihvmXLfkAWOZ5TKdbnqOc6nKSiRJEoPyQ@mail.gmail.com
Crash recovery on standby may encounter missing directories
when replaying database-creation WAL records. Prior to this
patch, the standby would fail to recover in such a case;
however, the directories could be legitimately missing.
Consider the following sequence of commands:
    CREATE DATABASE
    DROP DATABASE
    DROP TABLESPACE
If, after replaying the last WAL record and removing the
tablespace directory, the standby crashes and has to replay the
create database record again, crash recovery must be able to continue.
A fix for this problem was already attempted in 49d9cfc68bf4, but it
was reverted because of design issues. This new version is based
on Robert Haas' proposal: any missing tablespaces are created
during recovery before reaching consistency. Tablespaces
are created as real directories, and should be deleted
by later replay. CheckRecoveryConsistency ensures
they have disappeared.
The problems detected by this new code are reported as PANIC,
except when allow_in_place_tablespaces is set to ON, in which
case they are WARNING. Apart from making tests possible, this
gives users an escape hatch in case things don't go as planned.
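A hedged sketch of the recovery-time behavior described above; function
and variable names are illustrative rather than the exact patch, and the
WARNING path for allow_in_place_tablespaces is omitted:
    if (!reachedConsistency)
    {
        /* tolerate a missing tablespace directory before consistency */
        if (MakePGDirectory(parent_path) < 0 && errno != EEXIST)
            ereport(PANIC,
                    (errmsg("could not create directory \"%s\": %m",
                            parent_path)));
        /* a later DROP TABLESPACE record should remove it again */
    }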
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Asim R Praveen <apraveen@pivotal.io>
Author: Paul Guo <paulguo@gmail.com>
Reviewed-by: Anastasia Lubennikova <lubennikovaav@gmail.com> (older versions)
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> (older versions)
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Diagnosed-by: Paul Guo <paulguo@gmail.com>
Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
This is a backpatch to branches 10-14 of the following commits:
7170f2159fb2 Allow "in place" tablespaces.
c6f2f01611d4 Fix pg_basebackup with in-place tablespaces.
f6f0db4d6240 Fix pg_tablespace_location() with in-place tablespaces
7a7cd84893e0 doc: Remove mention to in-place tablespaces for pg_tablespace_location()
5344723755bd Remove unnecessary Windows-specific basebackup code.
In-place tablespaces were introduced as a testing helper mechanism, but
they are going to be used for a bugfix in WAL replay to be backpatched
to all stable branches.
I (Álvaro) had to adjust some code to account for lack of
get_dirent_type() in branches prior to 14.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Michaël Paquier <michael@paquier.xyz>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20220722081858.omhn2in5zt3g4nek@alvherre.pgsql
We have a few commands that "can't run in a transaction block",
meaning that if they complete their processing but then we fail
to COMMIT, we'll be left with inconsistent on-disk state.
However, the existing defenses for this are only watertight for
simple query protocol. In extended protocol, we didn't commit
until receiving a Sync message. Since the client is allowed to
issue another command instead of Sync, we're in trouble if that
command fails or is an explicit ROLLBACK. In any case, sitting
in an inconsistent state while waiting for a client message
that might not come seems pretty risky.
This case wasn't reachable via libpq before we introduced pipeline
mode, but it's always been an intended aspect of extended query
protocol, and likely there are other clients that could reach it
before.
To fix, set a flag in PreventInTransactionBlock that tells
exec_execute_message to force an immediate commit. This seems
to be the approach that does least damage to existing working
cases while still preventing the undesirable outcomes.
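A hedged sketch of the mechanism; the flag name is illustrative of the
approach, not necessarily the committed spelling:
    /* in PreventInTransactionBlock(): note we must not wait for Sync */
    MyXactFlags |= XACT_FLAGS_NEEDIMMEDIATECOMMIT;
    /* in exec_execute_message(), once the command has run to completion: */
    if (MyXactFlags & XACT_FLAGS_NEEDIMMEDIATECOMMIT)
        finish_xact_command();  /* commit now rather than wait for Sync */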
While here, add some documentation to protocol.sgml that explicitly
says how to use pipelining. That's latent in the existing docs if
you know what to look for, but it's better to spell it out; and it
provides a place to document this new behavior.
Per bug #17434 from Yugo Nagata. It's been wrong for ages,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/17434-d9f7a064ce2a88a3@postgresql.org