The transactions and subtransactions array that was allocated under the
snapshot builder's memory context and recorded during decoding was not
cleared in case of errors. This could result in an assertion failure if we
attempted to retry logical decoding within the same session. To address
this issue, we register a callback function on the snapshot builder's
memory context that clears the recorded transactions and subtransactions
array whenever the context itself is cleared.
This problem doesn't exist in PG16 and HEAD because, instead of using
InitialRunningXacts, we add the list of transaction IDs and
sub-transaction IDs that have modified catalogs and are running during
snapshot serialization to the serialized snapshot itself (see commit
7f13ac8123).
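
For the back branches, a minimal sketch of the callback registration
described above, with illustrative variable names (the exact identifiers
in snapbuild.c differ):

    /* Illustrative names; see snapbuild.c for the real ones */
    static TransactionId *InitialRunningXacts = NULL;
    static int  NInitialRunningXacts = 0;

    static void
    SnapBuildResetRunningXacts(void *arg)
    {
        /* The array lives in the context being reset, so just forget it */
        NInitialRunningXacts = 0;
        InitialRunningXacts = NULL;
    }

    /* At snapshot builder creation, inside the builder's memory context: */
    MemoryContextCallback *cb =
        MemoryContextAlloc(builder->context, sizeof(MemoryContextCallback));

    cb->func = SnapBuildResetRunningXacts;
    cb->arg = NULL;
    MemoryContextRegisterResetCallback(builder->context, cb);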
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 11
Discussion: http://postgr.es/m/18055-ab3beed9f4b7b7d6@postgresql.org
The new test added by commit 68a59f9e9 disables the subscription and
manually drops the associated replication slot. However, since
disabling the subscription doesn't wait for the walsender to release
the replication slot and exit, pg_drop_replication_slot() could
fail. Avoid the failure by adding a wait for the replication slot to
become inactive.
Reported-by: Hou Zhijie, as per buildfarm
Reviewed-by: Hou Zhijie
Discussion: https://postgr.es/m/OS0PR01MB571682316378379AA34854F694E9A%40OS0PR01MB5716.jpnprd01.prod.outlook.com
Backpatch-through: 15
Extend the PG_TEST_EXTRA documentation to mention resource-intensive
tests as well. The previous wording only mentioned special software
and security in the main paragraph, with resource usage mentioned for
just one of the tests in the list.
Backpatch to v15 where f47ed79cc8 added wal_consistency_checking as
a PG_TEST_EXTRA target.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ0OthTuBdiNkaX2BvxuHdK4Y1MVEb8_uEuD1yHMPmT9Og@mail.gmail.com
Backpatch-through: 15
pgbouncer can cause PQbackendPID() to return negative values because it
fills be_pid with random bytes (even these days, pid_max can only be
set up to 2^22 on 64-bit Linux machines, for example, so this cannot
happen with normal PID numbers). When this happens, pg_basebackup may
generate a temporary slot name that is not accepted by the parser,
leading to spurious failures, like:
pg_basebackup: error: could not send replication command
ERROR: replication slot name "pg_basebackup_-1201966863" contains
invalid character
This commit fixes that problem by formatting the result from
PQbackendPID() as an unsigned integer when creating the temporary
replication slot name, so that the invalid character is gone and the
command can be parsed.
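
A sketch of the change, assuming the slot name is built with snprintf()
(variable names are illustrative):

    /* Before: a negative be_pid yields "pg_basebackup_-1201966863" */
    snprintf(slotname, sizeof(slotname), "pg_basebackup_%d",
             PQbackendPID(conn));

    /* After: format as unsigned, so no '-' can appear in the name */
    snprintf(slotname, sizeof(slotname), "pg_basebackup_%u",
             (unsigned int) PQbackendPID(conn));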
Author: Jelte Fennema
Reviewed-by: Daniel Gustafsson, Nishant Sharma
Discussion: https://postgr.es/m/CAGECzQQOGvYfp8ziF4fWQ_o8s2K7ppaoWBQnTmdakn3s-4Z=5g@mail.gmail.com
Backpatch-through: 11
This test fails due to known bugs in the test and the server. Those
will be fixed in master shortly and possibly back-patched a bit later,
but in the meantime it is unhelpful for package maintainers if the tests
randomly fail, and it's not a good time to make complex changes in 16.
This had already been done for older branches prior to 15's release.
Now we're about to release 16, and Debian's test builds are regularly
failing on one architecture, so let's do the same for 15 and 16.
Reported-by: Christoph Berg <myon@debian.org>
Reported-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACVr8au2J_9D88UfRCi0JdWhyQDDxAcSVav0B0irx9nXEg%40mail.gmail.com
This could lead to an imprecise choice when splitting an index page of a
GiST index on a tsvector, when deciding which entries should remain on
the old page and which should move to a new page.
This has been wrong since tsearch2 was moved into core with commit
140d4ebcb46e, so backpatch all the way down. The error was spotted by
valgrind.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
Backpatch-through: 11
Dropping a database while a connection was being attempted on it could
lead to the presence of entries for already-dropped databases in shared
statistics. The issue is that MyDatabaseId was set earlier than it
should have been: if the connection attempt failed because the database
had been renamed or dropped, the shutdown callback of the shared
statistics would finish by re-inserting a seemingly valid entry for the
database already dropped.
As analyzed by the bug reporters, this issue could lead to phantom
entries in the database list maintained by the autovacuum launcher
(in rebuild_database_list()) if the dropped database was part of the
database list while it was still valid. After the database was dropped,
it would remain at the top of the list of databases to be considered by
the autovacuum workers, preventing autovacuum jobs from happening on all
the other databases still present.
The commit fixes this issue by delaying setting MyDatabaseId until the
database existence has been re-checked with the second scan on
pg_database after getting a shared lock on it, and by switching
pgstat_update_dbstats() so that nothing happens if MyDatabaseId is not
valid.
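
A sketch of the guard in the stats update path, simplified from the
shape of pgstat_update_dbstats() (helper names may differ across
branches):

    void
    pgstat_update_dbstats(TimestampTz ts)
    {
        PgStat_StatDBEntry *dbentry;

        /*
         * A failed connection attempt can shut down before MyDatabaseId is
         * validly set; don't re-create an entry for a database that may
         * already have been dropped.
         */
        if (!OidIsValid(MyDatabaseId))
            return;

        dbentry = pgstat_prep_database_pending(MyDatabaseId);
        /* ... accumulate session counters into dbentry as before ... */
    }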
Issue introduced by 5891c7a8ed8f, so backpatch down to 15.
Reported-by: Will Mortensen, Jacob Speidel
Analyzed-by: Will Mortensen, Jacob Speidel
Author: Andres Freund
Discussion: https://postgr.es/m/17973-bca1f7d5c14f601e@postgresql.org
Backpatch-through: 15
nFreeBlocks, defined as a long, stores the number of free blocks in a
logical tape. ltsGetFreeBlock() has been using an int to store the
value of nFreeBlocks, which could lead to overflows on platforms where
long and int are not the same size (in short, everything except Windows,
where long is 4 bytes).
The problematic intermediate variable is switched to be a long instead
of an int.
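
A sketch of the truncation, assuming the shape of ltsGetFreeBlock() in
logtape.c:

    /* Before: silently truncates once lts->nFreeBlocks exceeds INT_MAX */
    int     heapsize = lts->nFreeBlocks;    /* nFreeBlocks is a long */

    /* After: keep the counter's full width */
    long    heapsize = lts->nFreeBlocks;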
Issue introduced by c02fdc9223015, so backpatch down to 13.
Author: Ranier Vilela
Reviewed-by: Peter Geoghegan, David Rowley
Discussion: https://postgr.es/m/CAEudQApLDWCBR_xmwNjGBrDo+f+S4E87x3s7-+hoaKqYdtC4JQ@mail.gmail.com
Backpatch-through: 13
After commit b0bea38705, syslogger prints 63 warnings about failing to
close a listen socket at postmaster startup. That's because the
syslogger process forks before the ListenSockets array is initialized,
so ClosePostmasterPorts() calls "close(0)" 64 times. The first call
succeeds, because fd 0 is stdin.
This has been like this since commit 9a86f03b4e in version 13, which
moved the SysLogger_Start() call to before initializing ListenSockets.
We just didn't notice until commit b0bea38705 added the LOG message.
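
A sketch of why every slot triggers a close, assuming this shape for the
cleanup loop in ClosePostmasterPorts():

    /*
     * ListenSockets[] is still zero-filled when the syslogger forks, and
     * 0 != PGINVALID_SOCKET (-1), so every entry looks like a live socket.
     */
    for (int i = 0; i < MAXLISTEN; i++)     /* MAXLISTEN is 64 */
    {
        if (ListenSockets[i] != PGINVALID_SOCKET)
        {
            /* close(0) succeeds (stdin); the other 63 calls fail and LOG */
            if (closesocket(ListenSockets[i]) != 0)
                elog(LOG, "could not close listen socket: %m");
            ListenSockets[i] = PGINVALID_SOCKET;
        }
    }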
Reported by Michael Paquier and Jeff Janes.
Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/ZOvvuQe0rdj2slA9%40paquier.xyz
Discussion: https://www.postgresql.org/message-id/ZO0fgDwVw2SUJiZx@paquier.xyz#482670177eb4eaf4c9f03c1eed963e5f
Backpatch-through: 13
Revalidation of a plancache entry (after a cache invalidation event)
requires acquiring a snapshot. Normally that is harmless, but not
if the cached statement is one that needs to run without acquiring a
snapshot. We were already aware of that for TransactionStmts,
but for some reason hadn't extrapolated to the other statements that
PlannedStmtRequiresSnapshot() knows mustn't set a snapshot. This can
lead to unexpected failures of commands such as SET TRANSACTION
ISOLATION LEVEL. We can fix it in the same way, by excluding those
command types from revalidation.
However, we can do even better than that: there is no need to
revalidate for any statement type for which parse analysis, rewrite,
and plan steps do nothing interesting, which is nearly all utility
commands. To mechanize this, invent a parser function
stmt_requires_parse_analysis() that tells whether parse analysis does
anything beyond wrapping a CMD_UTILITY Query around the raw parse
tree. If that's what it does, then rewrite and plan will just
skip the Query, so that it is not possible for the same raw parse
tree to produce a different plan tree after cache invalidation.
stmt_requires_parse_analysis() is basically equivalent to the
existing function analyze_requires_snapshot(), except that for
obscure reasons that function omits ReturnStmt and CallStmt.
It is unclear whether those were oversights or intentional.
I have not been able to demonstrate a bug from not acquiring a
snapshot while analyzing these commands, but at best it seems mighty
fragile. It seems safer to acquire a snapshot for parse analysis of
these commands too, which allows making stmt_requires_parse_analysis
and analyze_requires_snapshot equivalent.
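
A sketch of the shape such a function takes (the exact set of node tags
handled in the real parser code may differ):

    bool
    stmt_requires_parse_analysis(RawStmt *parseTree)
    {
        switch (nodeTag(parseTree->stmt))
        {
                /* Statements that parse analysis transforms nontrivially */
            case T_InsertStmt:
            case T_DeleteStmt:
            case T_UpdateStmt:
            case T_MergeStmt:
            case T_SelectStmt:
            case T_ReturnStmt:
            case T_ExecuteStmt:
            case T_CallStmt:
                return true;

                /* Everything else is just wrapped in a CMD_UTILITY Query */
            default:
                return false;
        }
    }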
In passing this fixes a second bug, which is that ResetPlanCache
would exclude ReturnStmts and CallStmts from revalidation.
That's surely *not* safe, since they contain parsable expressions.
Per bug #18059 from Pavel Kulakov. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org
See the prior commit for an explanation of the goal of the change and why
it had to be split into two commits.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/20230808021541.7lbzdefvma7qmn3w@awork3.anarazel.de
Backpatch: 15-, where CI support was added
cirrus-ci will soon restrict the amount of free resources every user gets
(as many other CI providers have). For most users of CI that should not
be an issue. But e.g. for cfbot it will be an issue.
To allow configuring different resources on a per-repository basis, introduce
infrastructure for overriding the task execution environment. Unfortunately
this is not entirely trivial, as yaml anchors have to be defined before their
use, and cirrus-ci only allows injecting additional contents at the end of
.cirrus.yml.
To deal with that, move the definition of the CI tasks to
.cirrus.tasks.yml. The main .cirrus.yml is loaded first; then, if
defined, the file referenced by the REPO_CI_CONFIG_GIT_URL variable is
added, followed by the contents of .cirrus.tasks.yml. That allows
REPO_CI_CONFIG_GIT_URL to override the yaml anchors defined in .cirrus.yml.
Unfortunately git's default merge / rebase strategy does not handle copied
files, just renamed ones. To avoid painful rebasing over this change, this
commit just renames .cirrus.yml to .cirrus.tasks.yml, without adding a new
.cirrus.yml. That's done in the followup commit, which moves the relevant
portion of .cirrus.tasks.yml to .cirrus.yml. Until that is done,
REPO_CI_CONFIG_GIT_URL does not fully work.
The subsequent commit adds documentation for how to configure custom
compute resources to src/tools/ci/README.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/20230808021541.7lbzdefvma7qmn3w@awork3.anarazel.de
Backpatch: 15-, where CI support was added
The main reason for this change is to reduce different ways of executing
tasks, making it easier to use custom compute resources for cfbot. A
secondary benefit is that the tasks seem slightly faster this way;
apparently the increased startup overhead is outweighed by reduced
runtime overhead.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/20230808021541.7lbzdefvma7qmn3w@awork3.anarazel.de
Backpatch: 15-, where CI support was added
The number of CPUs is the cost-determining factor. Most instance types
that run tests have more memory per core than what we specified; there's
no real benefit in wasting that.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/20230808021541.7lbzdefvma7qmn3w@awork3.anarazel.de
Backpatch: 15-, where CI support was added
The list of external language drivers and procedural languages was
never complete or exhaustive, and rather than attempting to manage
it the content has migrated to the wiki. This replaces the tables
altogether with links to the wiki as we regularly get requests for
adding various projects, which we reject without any clear policy
for why or how the content should be managed.
The threads linked to below are the most recent discussions about this;
the archives contain many more.
Backpatch to all supported branches since the list on the wiki
applies to all branches.
Author: Jonathan Katz <jkatz@postgresql.org>
Discussion: https://postgr.es/m/169165415312.635.10247434927885764880@wrigleys.postgresql.org
Discussion: https://postgr.es/m/169177958824.635.11087800083040275266@wrigleys.postgresql.org
Backpatch-through: v11
Attribute missing values might be needed past the lifetime of the tuple
descriptors from which they are extracted. To avoid possibly leaving
dangling pointers for by-reference values, we cache a datumCopy'd version
of the datum in TopMemoryContext. Since we search the cache for the value
first, this only needs to be done once per session for any such value.
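
A sketch of the caching step (the helper name is hypothetical; the real
code lives in the missing-attribute lookup path):

    /* Hypothetical helper: copy a by-reference missing value so it
     * survives the tuple descriptor it came from. */
    static Datum
    cache_missing_value(Form_pg_attribute att, Datum value)
    {
        MemoryContext oldcxt;
        Datum         cached;

        /* TopMemoryContext gives the copy session lifetime */
        oldcxt = MemoryContextSwitchTo(TopMemoryContext);
        cached = datumCopy(value, att->attbyval, att->attlen);
        MemoryContextSwitchTo(oldcxt);

        return cached;      /* store this in the session-wide cache */
    }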
Original complaint from Tom Lane, idea for mitigation by Andrew Dunstan,
tweaked by Tom Lane.
Backpatch to version 11 where missing values were introduced.
Discussion: https://postgr.es/m/1306569.1687978174@sss.pgh.pa.us
This commit fixes the function of $subject for shared relations. This
feature was added by e042678, but the new behavior was unintentionally
removed by 5891c7a when moving statistics to shared memory.
Reported-by: Mitsuru Hinata
Author: Masahiro Ikeda
Reviewed-by: Kyotaro Horiguchi, Masahiko Sawada
Discussion: https://postgr.es/m/7cc69f863d9b1bc677544e3accd0e4b4@oss.nttdata.com
Backpatch-through: 15
A significant chunk of the time on the macos CI task is spent installing
packages using homebrew. The downloads of the packages are cached, but the
installation needs to happen every time. We can't cache the whole homebrew
installation, because it is too large due to pre-installed packages.
Speed this up by installing packages using macports and caching the
installation as .dmg. That's a lot faster than unpacking a tarball.
In addition, don't install llvm - it wasn't enabled when building, so it's
just a waste of time/space.
This substantially speeds up the mac CI time, both in the cold cache and in
the warm cache case (the latter from ~1m20s to ~5s).
It doesn't seem great to have diverging sources of packages for CI between
branches, so backpatch to 15 (where CI was added).
Discussion: https://postgr.es/m/20230805202539.r3umyamsnctysdc7@awork3.anarazel.de
Backpatch: 15-, where CI was added
The fix itself is fine, but the test revealed other problems related
to parallel query that are not easily fixable. Remove the test for
now to fix the buildfarm.
Discussion: https://postgr.es/m/88825.1691665432@sss.pgh.pa.us
Backpatch-through: 11
Substituting such values in extension scripts facilitated SQL injection
when @extowner@, @extschema@, or @extschema:...@ appeared inside a
quoting construct (dollar quoting, '', or ""). No bundled extension was
vulnerable. Vulnerable uses do appear in a documentation example and in
non-bundled extensions. Hence, the attack prerequisite was an
administrator having installed files of a vulnerable, trusted,
non-bundled extension. Subject to that prerequisite, this enabled an
attacker having database-level CREATE privilege to execute arbitrary
code as the bootstrap superuser. By blocking this attack in the core
server, there's no need to modify individual extensions. Back-patch to
v11 (all supported versions).
Reported by Micah Gate, Valerie Woolard, Tim Carey-Smith, and Christoph
Berg.
Security: CVE-2023-39417
The use of Memoize was already disabled in normal joins when the join
conditions had volatile functions, per the code in
match_opclause_to_indexcol(). Ordinarily, the parameterization for the
inner side of a nested loop will be an Index Scan, or at least eventually
lead to an index scan (perhaps nested several joins deep). However, for
lateral joins that's not the case, and seq scans can be parameterized
too, so we can't rely on match_opclause_to_indexcol().
Here we explicitly check the parameterization for volatile functions and
don't consider the generation of a Memoize path when such functions
are present.
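
A sketch of the check, assuming the lateral parameterization is available
as a list of expressions at the point where Memoize paths are considered:

    ListCell   *lc;

    /* Reject memoization if any cache key contains a volatile function */
    foreach(lc, param_exprs)
    {
        Node   *expr = (Node *) lfirst(lc);

        if (contain_volatile_functions(expr))
            return NULL;        /* don't generate a Memoize path */
    }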
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49nHFnHbpepLsv_yF3qkpCS4BdB-v8HoJVv8_=Oat0u_w@mail.gmail.com
Backpatch-through: 14, where Memoize was introduced
If MERGE executes an UPDATE action on a table with row-level security,
the code incorrectly applied the WITH CHECK clauses from the target
table's INSERT policies to new rows, instead of the clauses from the
table's UPDATE policies. In addition, it failed to check new rows
against the target table's SELECT policies, if SELECT permissions were
required (likely to always be the case).
In addition, if MERGE executes a DO NOTHING action for matched rows,
the code incorrectly applied the USING clauses from the target table's
DELETE policies to existing target tuples. These policies were applied
as checks that would throw an error, if they did not pass.
Fix this, so that a MERGE UPDATE action applies the same RLS policies
as a plain UPDATE query with a WHERE clause, and a DO NOTHING action
does not apply any RLS checks (other than adding clauses from SELECT
policies to the join).
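
A sketch of the intended checks in executor terms (simplified; the WCO
kinds are the existing executor enum values):

    /*
     * MERGE ... WHEN MATCHED THEN UPDATE: check the new row against the
     * target table's UPDATE policies (and SELECT policies where SELECT
     * permissions are required), not its INSERT policies.
     */
    ExecWithCheckOptions(WCO_RLS_UPDATE_CHECK, resultRelInfo, slot, estate);

    /*
     * MERGE ... WHEN MATCHED THEN DO NOTHING: perform no RLS check at all;
     * in particular, don't apply the USING quals of DELETE policies to the
     * existing target tuple.
     */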
Back-patch to v15, where MERGE was introduced.
Dean Rasheed, reviewed by Stephen Frost.
Security: CVE-2023-39418
This was failing for queries which try to get the .type() of a
jpiLikeRegex. For example:
select jsonb_path_query('["string", "string"]',
                        '($[0] like_regex ".{7}").type()');
Reported-by: Alexander Kozhemyakin
Bug: #18035
Discussion: https://postgr.es/m/18035-64af5cdcb5adf2a9@postgresql.org
Backpatch-through: 12, where SQL/JSON path was added.
Commit e7cb7ee14, which introduced the infrastructure for FDWs and
custom scan providers to replace joins with scans, failed to add support
for handling pseudoconstant quals assigned to replaced joins in
createplan.c, leading to an incorrect plan without a gating Result node
when postgres_fdw replaced a join with such a qual.
To fix, we could add the support by 1) modifying the ForeignPath and
CustomPath structs to store the list of RestrictInfo nodes to apply to
the join, as in JoinPaths, if they represent foreign and custom scans
replacing a join with a scan, and by 2) modifying create_scan_plan() in
createplan.c to use that list in that case, instead of the
baserestrictinfo list, to get pseudoconstant quals assigned to the join;
but #1 would cause an ABI break. So fix by modifying the infrastructure
to just disallow replacing joins with such quals.
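
A sketch of the disallow check, placed where the planner offers a join to
foreign/custom providers (field names as in the RestrictInfo node):

    ListCell   *lc;

    /*
     * A pseudoconstant join clause would require a gating Result node
     * above the scan, which createplan.c can't add here; don't let a
     * foreign or custom scan replace this join.
     */
    foreach(lc, extra->restrictlist)
    {
        RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);

        if (rinfo->pseudoconstant)
            return;
    }

    /* ... otherwise proceed to GetForeignJoinPaths() and custom hooks ... */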
Back-patch to all supported branches.
Reported by Nishant Sharma. Patch by me, reviewed by Nishant Sharma and
Richard Guo.
Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
Historically, hba.c limited tokens in the authentication configuration
files (pg_hba.conf and pg_ident.conf) to less than 256 bytes. We have
seen a few reports of this limit causing problems; notably, for
moderately-complex LDAP configurations. Increase the limit to 10240
bytes as a low-risk stop-gap solution.
In v13 and earlier, this also requires raising MAX_LINE, the limit
on overall line length. I'm hesitant to make this code consume
too much stack space, so I only raised that to 20480 bytes.
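
In terms of the constants involved (a sketch; MAX_TOKEN is the assumed
name of the per-token limit in hba.c, MAX_LINE is the line-length limit
mentioned above):

    #define MAX_TOKEN   10240       /* per-token limit, was 256 */
    #define MAX_LINE    20480       /* v13 and earlier: overall line limit,
                                     * kept moderate to limit stack usage */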
Discussion: https://postgr.es/m/1588937.1690221208@sss.pgh.pa.us
We include the message type while displaying an error context in the
apply worker. However, if the message type is unknown, retrieving the
message type string would throw an error that hides the original error.
Instead, simply return a string indicating that the message type is
unknown.
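
A sketch of the change in the message-type lookup (simplified):

    const char *
    logicalrep_message_type(LogicalRepMsgType action)
    {
        switch (action)
        {
            case LOGICAL_REP_MSG_BEGIN:
                return "BEGIN";
            case LOGICAL_REP_MSG_COMMIT:
                return "COMMIT";
            /* ... other known message types ... */
            default:

                /*
                 * Don't throw an error here: this is reached from an error
                 * context callback, and an error would mask the original
                 * one.
                 */
                return "???";
        }
    }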
Reported-by: Ashutosh Bapat
Author: Euler Taveira, Amit Kapila
Reviewed-by: Ashutosh Bapat
Backpatch-through: 15
Discussion: https://postgr.es/m/CAExHW5suAEDW-mBZt_qu4RVxWZ1vL54-L+ci2zreYWebpzxYsA@mail.gmail.com
Some of the test_decoding test output was extremely wide, because it
deals with massive toasted values, and the aligned mode causes psql to
produce 200kB of whitespace and dashes. Change to unaligned mode
temporarily to avoid that behavior.
Backpatch to 14, where it applies cleanly.
Discussion: https://postgr.es/m/20230405103953.sxleixp3uz5lazst@alvherre.pgsql
If both the passed-in plan pointer and plansource->gplan are
NULL, CachedPlanIsSimplyValid would think that the plan pointer
is possibly-valid and try to dereference it. For the one extant
call site in plpgsql, this situation doesn't normally happen,
which is why we've not noticed. However, it appears to be possible
if the previous use of the cached plan failed, as per report from
Justin Pryzby. Add an extra check to prevent crashing.
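
A sketch of the added guard (simplified from the fast-path check in
plancache.c):

    /*
     * Both the caller's pointer and plansource->gplan can be NULL if a
     * previous use of the cached plan failed; check before dereferencing.
     */
    if (plan && plan == plansource->gplan && plan->is_valid)
    {
        /* ... remaining fast-path validity checks ... */
        return true;
    }

    return false;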
Back-patch to v13 where this code was added.
Discussion: https://postgr.es/m/ZLlV+STFz1l/WhAQ@telsasoft.com
IN and NOT IN work fine on records and arrays, so just say that
they accept "expressions" not "scalar expressions". I think that
that phrasing was meant to say that they don't work on set-returning
expressions, but that's not the common meaning of "scalar".
Revise the description of row-constructor comparisons to make it
perhaps a bit less confusing. (This partially reverts some
dubious wording changes made by commit f56651519.)
Per gripe from Ilya Nenashev. Back-patch to supported branches.
In HEAD and v16, also drop a NOTE about pre-8.2 behavior, which
is hopefully no longer of interest to anybody.
Discussion: https://postgr.es/m/168968062460.632.14303906825812821399@wrigleys.postgresql.org
The "count" argument of SPI_exec() only limits execution when
the query is actually returning rows. This was not the case
before PG 9.0, so this example was correct when written; but
we missed updating it in commit 2ddc600f8. Extend the example
to show the behavior both with and without RETURNING.
While here, improve the commentary and markup for the rest
of the example.
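
A sketch of the two behaviors at the C SPI level (the table name is
illustrative):

    /* No RETURNING: the count is ignored, all qualifying rows are inserted */
    SPI_exec("INSERT INTO tab SELECT * FROM tab", 5);

    /* With RETURNING: execution stops once 5 rows have been returned */
    SPI_exec("INSERT INTO tab SELECT * FROM tab RETURNING *", 5);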
David G. Johnston and Tom Lane, per report from Curt Kolovson.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CANhYJV6HWtgz_qjx_APfK0PAgLUzY-2vjLuj7i_o=TZF1LAQew@mail.gmail.com
This was missed in cb0cca1 and noticed while poking at a different
patch, before buildfarm member koel was able to complain. Like the
other commit, backpatch all the way down to limit the odds of merge
conflicts.
Backpatch-through: 11
A crash in the middle of a checkpoint, with some two-phase state data
already flushed to disk by this checkpoint, could cause a follow-up crash
recovery to recover the same transaction twice: once from what was found
in pg_twophase/ at the beginning of recovery, and a second time when
replaying its corresponding WAL record.
This would lead to FATAL failures in the startup process during
recovery, where the same transaction would have its state recovered
twice instead of once:
LOG: recovering prepared transaction 731 from shared memory
LOG: recovering prepared transaction 731 from shared memory
FATAL: lock ExclusiveLock on object 731/0/0 is already held
This issue is fixed by skipping the addition of any 2PC state coming
from a record whose equivalent 2PC state file has already been loaded
into TwoPhaseState at the beginning of recovery by restoreTwoPhaseData(),
which is OK as long as the system has not reached a consistent state.
The timing needed to mess up recovery processing is very narrow, and
this would be very unlikely to happen in practice. The reporter of the
issue has demonstrated the bug using injection points to force a PANIC
in the middle of a checkpoint.
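
A sketch of the skip, using a hypothetical helper that searches
TwoPhaseState by transaction ID (the real lookup in twophase.c may
differ):

    /* Hypothetical helper: is this xid already tracked in TwoPhaseState? */
    static bool
    TwoPhaseStateContainsXid(TransactionId xid)
    {
        for (int i = 0; i < TwoPhaseState->numPrepXacts; i++)
        {
            if (TwoPhaseState->prepXacts[i]->xid == xid)
                return true;
        }
        return false;
    }

    /* In PrepareRedoAdd(), while replaying a PREPARE record during crash
     * recovery: skip state already loaded by restoreTwoPhaseData(). */
    if (TwoPhaseStateContainsXid(hdr->xid))
        return;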
Issue introduced in 728bd99, so backpatch all the way down.
Reported-by: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>
Author: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>
Author: Michael Paquier
Discussion: https://postgr.es/m/109e6994-b971-48cb-84f6-829646f18b4c.mengjuan.cmj@alibaba-inc.com
Backpatch-through: 11