RelationSyncCache, the hash table in charge of tracking the relation
schemas sent through pgoutput, was forgetting to free the TupleDescs
associated with the two slots used to store the new and old tuples,
leaking some memory each time the slots of an existing relation entry
were cleaned up on relation invalidation.
This is rather hard to notice as the bloat per invalidation is pretty
minimal, but a long-running WAL sender would get into trouble over time
depending on the workload. sysbench, coupled with some memory
monitoring of the WAL sender, has proved to be pretty good at showing
the problem.
Issue introduced in 52e4f0cd47, which added row filters for logically
replicated tables.
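As an illustration, the required cleanup looks roughly like the
following (a simplified sketch, not the committed change; the entry
field names are assumptions, and the new_slot case is analogous):

if (entry->old_slot)
{
    TupleDesc   desc = entry->old_slot->tts_tupleDescriptor;

    /*
     * Dropping the slot alone does not release a non-refcounted
     * TupleDesc, so free it explicitly after dropping the slot.
     */
    ExecDropSingleTupleTableSlot(entry->old_slot);
    FreeTupleDesc(desc);
    entry->old_slot = NULL;
}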
Author: Boyu Yang
Reviewed-by: Michael Paquier, Hou Zhijie
Discussion: https://postgr.es/m/DM3PR84MB3442E14B340E553313B5C816E3252@DM3PR84MB3442.NAMPRD84.PROD.OUTLOOK.COM
Backpatch-through: 15
Spell out how a = key associated with a SAOP array renders a > key
against the same index column redundant at the relevant point inside
_bt_preprocess_keys.
Follow-up to commit 5bf748b8.
Ordinarily transformSetOperationTree will collect all UNION/
INTERSECT/EXCEPT steps into the setOperations tree of the topmost
Query, so that leaf queries do not contain any setOperations.
However, it cannot thus flatten a subquery that also contains
WITH, ORDER BY, FOR UPDATE, or LIMIT. I (tgl) forgot that in
commit 07b4c48b6 and wrote an assertion in rule deparsing that
a leaf's setOperations would always be empty.
If it were nonempty then we would want to parenthesize the subquery
to ensure that the output represents the setop nesting correctly
(e.g. UNION below INTERSECT had better get parenthesized). So
rather than just removing the faulty Assert, let's change it into
an additional case to check to decide whether to add parens. We
don't expect that the additional case will ever fire, but it's
cheap insurance.
Man Zeng and Tom Lane
Discussion: https://postgr.es/m/tencent_7ABF9B1F23B0C77606FC5FE3@qq.com
Commit 4ac2a9bece introduced the REJECT_LIMIT option for the COPY
command. This commit extends support for this option to file_fdw.
As with the REJECT_LIMIT option for COPY, this option limits the
maximum number of erroneous rows that can be skipped.
If the number of data type conversion errors exceeds this limit,
accessing the file_fdw foreign table will fail with an error,
even when on_error = 'ignore' is specified.
Since the CREATE/ALTER FOREIGN TABLE commands require foreign
table options to be single-quoted, this commit updates
defGetCopyRejectLimitOption() to also handle string values for them,
in addition to the int64 values used by the COPY command option.
Author: Atsushi Torikoshi
Reviewed-by: Fujii Masao, Yugo Nagata, Kirill Reshke
Discussion: https://postgr.es/m/bab68a9fc502b12693f0755b6f35f327@oss.nttdata.com
In the dim past we figured it was okay to ignore collations
when combining UNION set-operation nodes into a single N-way
UNION operation. I believe that was fine at the time, but
it stopped being fine when we added nondeterministic collations:
the semantics of distinct-ness are affected by those. v17 made
it even less fine by allowing per-child sorting operations to
be merged via MergeAppend, although I think we accidentally
avoided any live bug from that.
Add a check that collations match before deciding that two
UNION nodes are equivalent. I also failed to resist the
temptation to comment plan_union_children() a little better.
Back-patch to all supported branches (v13 now), since they
all have nondeterministic collations.
Discussion: https://postgr.es/m/3605568.1731970579@sss.pgh.pa.us
Previously, database object statistics manipulation functions like
pg_set_relation_stats() reported unclear error and hint messages
when executed during recovery. These messages were "internal",
making it difficult for users to understand the issue:
ERROR: cannot acquire lock mode ShareUpdateExclusiveLock on database objects while recovery is in progress
HINT: Only RowExclusiveLock or less can be acquired on database objects during recovery.
This commit updates the error handling so that, if these functions
are called during recovery, they produce clearer messages:
ERROR: recovery is in progress
HINT: Statistics cannot be modified during recovery.
The related documentation has also been updated to explicitly
clarify that these functions are not available during recovery.
Author: Fujii Masao
Reviewed-by: Heikki Linnakangas, Maxim Orlov
Discussion: https://postgr.es/m/6d313829-5f56-4a28-ae4b-bd01bf1ae791@oss.nttdata.com
The error message reported when parameters or keywords contain extra
whitespace was "trailing data found", which was confusing because it
gave no hint about what was actually wrong.
Issue introduced in 430ce189fc, hence there is no need for a
backpatch.
Author: Yushi Ogiwara
Reviewed-by: Fujii Masao, Tom Lane, Bruce Momjian
Discussion: https://postgr.es/m/645bd22a53c4da8a1bc7e1e52d9d3b52@oss.nttdata.com
Relying on pg_memory_is_all_zeros(), which can apply SIMD instructions
when dealing with an aligned page, proves to be at least three times
faster than the original size_t-based comparisons when checking whether
a BLCKSZ page is full of zeros. Note that PageIsVerifiedExtended() is
called each time a page is read from disk, so making it faster is a
good thing.
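A rough sketch of where the check now sits (simplified; the other
validity checks performed by PageIsVerifiedExtended() are omitted):

/* An all-zero page read from disk is considered valid. */
if (pg_memory_is_all_zeros(page, BLCKSZ))
    return true;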
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/CAApHDvq7P-JgFhgtxUPqhavG-qSDVUhyWaEX9M8_MNorFEijZA@mail.gmail.com
pg_memory_is_all_zeros() is currently implemented to do only a
byte-by-byte comparison. While sufficient for its existing callers,
which check pgstats entries, it could lead to performance regressions
should it be used for larger memory areas, like 8kB blocks, in future
commits.
This commit optimizes the implementation of this function to be more
efficient for larger sizes, written in a way that lets compilers
optimize the code. This is portable across 32-bit and 64-bit
architectures.
The implementation handles three cases, depending on the size of the
input provided:
* If less than sizeof(size_t), do a simple byte-by-byte comparison.
* If between sizeof(size_t) and (sizeof(size_t) * 8 - 1):
** Phase 1: byte-by-byte comparison, until the pointer is aligned.
** Phase 2: size_t comparisons, with aligned pointers, up to the last
aligned location possible.
** Phase 3: byte-by-byte comparison, until the end location.
* If more than (sizeof(size_t) * 8) bytes, this is the same as case 2
except that an additional phase is placed between Phase 1 and Phase 2,
with 8 * sizeof(size_t) comparisons using bitwise OR, to encourage
compilers to use SIMD instructions if available.
The last improvement proves to be at least 3 times faster than the
plain size_t comparisons currently used for the all-zero page check in
PageIsVerifiedExtended().
The optimization tricks that would encourage the use of SIMD
instructions have been suggested by David Rowley.
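For reference, here is a self-contained sketch of the strategy
described above (an illustration of the phases, not the committed
code; the exact thresholds and loop shapes may differ):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static inline bool
memory_is_all_zeros(const void *ptr, size_t len)
{
    const unsigned char *p = (const unsigned char *) ptr;
    const unsigned char *end = p + len;
    const size_t *aligned_end;
    const size_t *w;

    /* Case 1: input smaller than sizeof(size_t), plain byte loop. */
    if (len < sizeof(size_t))
    {
        while (p < end)
            if (*p++ != 0)
                return false;
        return true;
    }

    /* Phase 1: byte-by-byte comparison until the pointer is aligned. */
    while (((uintptr_t) p & (sizeof(size_t) - 1)) != 0)
        if (*p++ != 0)
            return false;

    aligned_end = (const size_t *)
        ((uintptr_t) end & ~(uintptr_t) (sizeof(size_t) - 1));
    w = (const size_t *) p;

    /*
     * Large inputs only: OR together 8 size_t words per iteration,
     * encouraging compilers to use SIMD instructions if available.
     */
    while (w + 8 <= aligned_end)
    {
        if ((w[0] | w[1] | w[2] | w[3] | w[4] | w[5] | w[6] | w[7]) != 0)
            return false;
        w += 8;
    }

    /* Phase 2: aligned size_t comparisons, up to the last aligned spot. */
    while (w < aligned_end)
        if (*w++ != 0)
            return false;

    /* Phase 3: byte-by-byte comparison until the end location. */
    p = (const unsigned char *) w;
    while (p < end)
        if (*p++ != 0)
            return false;

    return true;
}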
Author: Bertrand Drouvot
Reviewed-by: Michael Paquier, Ranier Vilela
Discussion: https://postgr.es/m/CAApHDvq7P-JgFhgtxUPqhavG-qSDVUhyWaEX9M8_MNorFEijZA@mail.gmail.com
After commit 5a2fed911a, the catalog state
resulting from these commands ceased to affect sessions. Restore the
longstanding behavior, which is like beginning the session with a SET
ROLE command. If cherry-picking the CVE-2024-10978 fixes, default to
including this, too. (This fixes an unintended side effect of fixing
CVE-2024-10978.) Back-patch to v12, like that commit. The release team
decided to include v12, despite the original intent to halt v12 commits
earlier this week.
Tom Lane and Noah Misch. Reported by Etienne LAFARGE.
Discussion: https://postgr.es/m/CADOZwSb0UsEr4_UTFXC5k7=fyyK8uKXekucd+-uuGjJsGBfxgw@mail.gmail.com
Previously, LogicalIncreaseRestartDecodingForSlot() accidentally
accepted any LSN as the candidate_lsn and candidate_valid after the
restart_lsn of the replication slot was updated, potentially causing
the restart_lsn to move backwards.
A scenario where this could happen in logical replication is: after a
logical replication restart, based on previous candidate_lsn and
candidate_valid values in memory, the restart_lsn advances upon
receiving a subscriber acknowledgment. Then, logical decoding restarts
from an older point, setting candidate_lsn and candidate_valid based
on an old RUNNING_XACTS record. Subsequent subscriber acknowledgments
then update the restart_lsn to an LSN older than the current value.
In the reported case, after WAL files were removed by a checkpoint,
the retreated restart_lsn prevented logical replication from
restarting due to missing WAL segments.
This change essentially turns an 'if' condition into an 'else if'
within the function. The previous code had an asymmetry in this regard
compared to LogicalIncreaseXminForSlot(), which does almost the same
thing for different fields.
The WAL removal issue was reported by Hubert Depesz Lubaczewski.
Backpatch to all supported versions, since the bug has existed since
9.4, where logical decoding was introduced.
Reviewed-by: Tomas Vondra, Ashutosh Bapat, Amit Kapila
Discussion: https://postgr.es/m/Yz2hivgyjS1RfMKs%40depesz.com
Discussion: https://postgr.es/m/85fff40e-148b-4e86-b921-b4b846289132%40vondra.me
Backpatch-through: 13
In commit 08c0d6ad6 which introduced "rainbow" arcs in regex NFAs,
I didn't think terribly hard about what to do when creating the color
complement of a rainbow arc. Clearly, the complement cannot match any
characters, and I took the easy way out by just not building any arcs
at all in the complement arc set. That mostly works, but Nikolay
Shaplov found a case where it doesn't: if we decide to delete that
sub-NFA later because it's inside a "{0}" quantifier, delsub()
suffered an assertion failure. That's because delsub() relies on
the target sub-NFA being fully connected. That was always true
before, and the best fix seems to be to restore that property.
Hence, invent a new arc type CANTMATCH that can be generated in
place of an empty color complement, and drop it again later when we
start NFA optimization. (At that point we don't need to do delsub()
any more, and besides there are other cases where NFA optimization can
lead to disconnected subgraphs.)
It appears that this bug has no consequences in a non-assert-enabled
build: there will be some transiently leaked NFA states/arcs, but
they'll get cleaned up eventually. Still, we don't like assertion
failures, so back-patch to v14 where rainbow arcs were introduced.
Per bug #18708 from Nikolay Shaplov.
Discussion: https://postgr.es/m/18708-f94f2599c9d2c005@postgresql.org
Allowing foreign keys where the referenced and the referencing columns
have collations with different notions of equality is problematic.
This can only happen when using nondeterministic collations, for
example, if the referencing column is case-insensitive and the
referenced column is not, or vice versa. It does not happen if both
collations are deterministic.
To show one example:
CREATE COLLATION case_insensitive (provider = icu, deterministic = false, locale = 'und-u-ks-level2');
CREATE TABLE pktable (x text COLLATE "C" PRIMARY KEY);
CREATE TABLE fktable (x text COLLATE case_insensitive REFERENCES pktable ON UPDATE CASCADE ON DELETE CASCADE);
INSERT INTO pktable VALUES ('A'), ('a');
INSERT INTO fktable VALUES ('A');
BEGIN; DELETE FROM pktable WHERE x = 'a'; TABLE fktable; ROLLBACK;
BEGIN; DELETE FROM pktable WHERE x = 'A'; TABLE fktable; ROLLBACK;
Both of these DELETE statements delete the one row from fktable. So
this means that one row from fktable references two rows in pktable,
which should not happen. (That's why a primary key or unique
constraint is required on pktable.)
When nondeterministic collations were implemented, the SQL standard
available to yours truly said that referential integrity checks should
be performed with the collation of the referenced column, and so
that's how we implemented it. But this turned out to be a mistake in
the SQL standard, for the same reasons as above, that was later
(SQL:2016) fixed to require both collations to be the same. So that's
what we are aiming for here.
We don't have to be quite so strict. We can allow different
collations if they are both deterministic. This is also good for
backward compatibility.
So the new rule is that the collations either have to be the same or
both deterministic. Or in other words, if one of them is
nondeterministic, then both have to be the same.
Users upgrading from an earlier version whose setups are affected will
need to make changes to their schemas (i.e., change one or both
collations in affected foreign-key relationships) before the upgrade
will succeed.
Some of the nice test cases for the previous situation in
collate.icu.utf8.sql are now obsolete. They are changed to just
exercise the error checking of the new rule. Note that collate.sql
already
contained a test for foreign keys with different deterministic
collations.
A bunch of code in ri_triggers.c that added a COLLATE clause to
enforce the referenced column's collation can be removed, because both
columns now have to have the same notion of equality, so it doesn't
matter which one is used.
Reported-by: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/78d824e0-b21e-480d-a252-e4b84bc2c24b@illuminatedcomputing.com
Previously, in unlucky cases, it was possible for pg_rewind to remove
certain WAL segments from the rewound demoted primary. In particular
this happens if those files have been marked for archival (i.e., their
.ready files were created) but not yet archived; the newly promoted node
no longer has such files because they have been recycled, but they
are likely critical for recovery in the demoted node. If pg_rewind
removes them, recovery is no longer possible.
Fix this by having the scan that looks for a checkpoint maintain a
hash table of files in this situation, which the decide_file_actions
phase can consult so that it knows to preserve them.
Backpatch to 14. The problem also exists in 13, but that branch was not
blessed with commit eb00f1d4bf, so this patch is difficult to apply
there. Users of older releases will just have to continue to be extra
careful when rewinding.
Co-authored-by: Полина Бунгина (Polina Bungina) <bungina@gmail.com>
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://postgr.es/m/CAAtGL4AhzmBRsEsaDdz7065T+k+BscNadfTqP1NcPmsqwA5HBw@mail.gmail.com
Refactor objectNamesToOids() to use get_object_address() internally if
possible. Not only does this save a lot of code, it also allows us to
use the object locking provided by get_object_address() for
GRANT/REVOKE. There was previously a code comment that complained
about the lack of locking in objectNamesToOids(), which is now fixed.
The check in ExecGrant_Type_check() is obsolete because
get_object_address_type() already does the same check.
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/bf72b82c-124d-4efa-a484-bb928e9494e4@eisentraut.org
In EEOP_BOOL_AND_STEP* and EEOP_BOOL_OR_STEP*, we emitted pointless
store instructions for resnull/resvalue values that had just been
loaded from the same fields by the previous instructions. They will
surely get optimized away by LLVM if any optimizations are enabled,
but it's better to not emit them in the first place. In
EEOP_BOOL_NOT_STEP, it's a similar story with resnull.
In EEOP_NULLIF, when it returns NULL, there was also a redundant store
to resvalue just after storing a 0 to it. The value of resvalue
doesn't matter when resnull is set, so in fact even storing the 0 is
unnecessary, but I kept that because we tend to do that for general
tidiness.
Author: Xing Guo <higuoxing@gmail.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/CACpMh%2BC%3Dg13WdvzLRSponsVWGgxwDSMzQWM4Gz0heOyaA0-N6g@mail.gmail.com
This fixes a set of race conditions with cumulative statistics where a
shared stats entry could be dropped while it should still be valid, in
the event that it gets reused: a new entry may refer to a different
object but share the same hash key. This can happen with various stats
kinds, like:
- Replication slots, which internally compute an index number that can
be reused for different slot names.
- Stats kinds that use an OID in the object key, where an OID
wraparound can cause the same key to be reused for a different object.
- As of PostgreSQL 18, custom pgstats kinds could also be an issue,
depending on their implementation.
This issue is fixed by introducing a counter called "generation" in the
shared entries via PgStatShared_HashEntry, initialized at 0 when an
entry is created and incremented when the same entry is reused, to avoid
concurrent issues on drop because of other backends still holding a
reference to it. This "generation" is copied to the local copy that a
backend holds when looking at an object, then cross-checked with the
shared entry to make sure that the entry is not dropped if it has been
reused, even if its "refcount" would otherwise justify the drop.
This problem could show up when a backend shuts down and needs to
discard any entries it still holds, causing statistics to be removed
when they should not, or even an assertion failure. Another report
involved a failure in a standby after an OID wraparound, where the
startup process would FATAL on a "can only drop stats once", stopping
recovery abruptly. The buildfarm has been sporadically complaining
about the problem, as well, but the window is hard to reach with the
in-core tests.
Note that the issue can be reproduced easily by adding a sleep before
dshash_find() in pgstat_release_entry_ref() to enlarge the problematic
window while repeating test_decoding's isolation test oldest_xmin a
couple of times, for example, as pointed out by Alexander Lakhin.
Reported-by: Alexander Lakhin, Peter Smith
Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/CAA4eK1KxuMVyAryz_Vk5yq3ejgKYcL6F45Hj9ZnMNBS-g+PuZg@mail.gmail.com
Discussion: https://postgr.es/m/17947-b9554521ad963c9c@postgresql.org
Backpatch-through: 15
Previously, only backends, autovacuum workers, and background workers
had an entry in the PMChildFlags array. With this commit, all
postmaster child processes, including all the aux processes, have an
entry. Dead-end backends still don't get an entry, though, and other
processes that don't touch shared memory will never mark their
PMChildFlags entry as active.
We now maintain separate freelists for different kinds of child
processes. That ensures that there are always slots available for
autovacuum and background workers. Previously, pre-authentication
backends could prevent autovacuum or background workers from starting
up, by using up all the slots.
The code to manage the slots in the postmaster process is in a new
pmchild.c source file, because postmaster.c is already so large.
Assigning pmsignal slot numbers is now pmchild.c's responsibility.
This replaces the PMChildInUse array in pmsignal.c.
Some of the comments in postmaster.c still talked about the "stats
process", but that was removed in commit 5891c7a8ed. Fix those while
we're at it.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
Previously, the postmaster would never try to kill dead-end child
processes, even if there were no other processes left. A dead-end
backend will eventually exit, when authentication_timeout expires, but
if a dead-end backend is the only thing that's preventing the server
from shutting down, it seems better to kill it immediately. This is
particularly important if there is a bug in the early startup code
that prevents a dead-end child from timing out and exiting normally.
Includes a test for the case where a dead-end backend previously
prevented the server from shutting down.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
These two functions with their different argument lists have never been
tested in this module, so let's add something.
Author: Ronan Dunklau
Discussion: https://postgr.es/m/ZzMSJkiNZhimjXWx@paquier.xyz
Because in the 18 cycle we turned checksums on by default with commit
04bec894a0, and pg_upgrade fails if the setting doesn't match between
the old and new clusters, the built-in cross-version pg_upgrade test
fails if the old version is older than 18. Fix the script so that it
creates
the old cluster with checksums enabled (-k) in cross-version scenarios.
This went unnoticed because the buildfarm doesn't use the same test code
for cross-version testing.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/202411071838.7fgkb7uvavvz@alvherre.pgsql
InjectionPointEntry->name was described as a hash key, which was fine
when introduced in d86d20f0ba, but it no longer is.
Oversight in 86db52a506, which changed the way injection points are
stored in shared memory from a hash table to an array.
Backpatch-through: 17
Multiple buildfarm animals warn that a newly added Assert() can never
fail; remove it to avoid the noise. While at it, use
direct assignment to obtain the value we need, avoiding an unnecessary
memcpy().
(I decided to remove the "pfree" call for the detoasted short-datum;
because this is only used for DDL, it's not problematic to leak such a
small allocation.)
Noted by Tom Lane about 14e87ffa5c.
Discussion: https://postgr.es/m/3649828.1731083171@sss.pgh.pa.us