When performing a bitmap heap scan, we don't want to miss concurrent
writes that occurred after we observed the heap's rs_nblocks, but before
we took predicate locks on index pages. Therefore, we can't skip
fetching any heap tuples that are referenced by the index, because we
need to test them all with CheckForSerializableConflictOut(). The
old optimization that would ignore any references to blocks >=
rs_nblocks gets in the way of that requirement, because it means that
concurrent writes in that window are ignored.
Removing that optimization shouldn't affect correctness at any isolation
level, because any new tuples shouldn't be visible to an MVCC snapshot.
There also shouldn't be any error-causing references to heap blocks past
the end, because we should have held at least an AccessShareLock on the
table before the index scan. The table can't get smaller while our
transaction is running. For now, though, we'll keep the optimization at
lower isolation levels to avoid making unnecessary changes in a bug fix.
Back-patch to all supported releases. In release 11, the code is in a
different place but not fundamentally different. Fixes one aspect of
bug #17949.
Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
When predicate-locking btrees, we have a special case for completely
empty btrees, since there is no page to lock. This was racy, because,
without buffer lock held, a matching key could be inserted between the
_bt_search() and the PredicateLockRelation() calls.
Fix, by rechecking _bt_search() after taking the relation-level SIREAD
lock, if using SERIALIZABLE isolation and an empty btree is discovered.
Back-patch to all supported releases. Fixes one aspect of bug #17949.
Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
Creation of a file with a very long name can create problems on Windows
due to its file path limits. Work around that by creating the file via a
symlink with a shorter name.
Error displayed by buildfarm animal fairywren.
Backpatch to all live branches.
When set, this environment variable was only effective for data
directories but not for all the other temporary files created by
PostgreSQL::Test::Utils. Keeping the temporary files after a successful
run can be useful for debugging purposes.
The documentation is updated to reflect the new behavior, with contents
available in doc/ since v16 and in src/test/perl/README since v15.
Author: Jacob Champion
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CAAWbhmgHtDH1SGZ+Fw05CsXtE0mzTmjbuUxLB9mY9iPKgM6cUw@mail.gmail.com
Discussion: https://postgr.es/m/YyPd9unV14SX2bLF@paquier.xyz
Backpatch-through: 11
Commit dd38ff28ad added a new error message "missing contrecord" when
we fail to reassemble a record. Unfortunately that caused noisy
messages to be logged by pg_waldump at end of segment, and by walsender
when asked to shut down on a segment boundary.
Remove the new error message, so that this condition signals end-of-
WAL without a message. It's arguably a reportable condition that should
not be silenced while performing crash recovery, but fixing that without
introducing noise in the other cases will require more research.
Back-patch to 15.
Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://postgr.es/m/6a1df56e-4656-b3ce-4b7a-a9cb41df8189%40enterprisedb.com
Commit f24523672d fixed a memory leak by moving the modifiedCols bitmap
into the per-row memory context. In the case of AFTER UPDATE triggers,
the bitmap is however referenced from an event kept until the end of the
query, resulting in a use-after-free bug.
Fixed by copying the bitmap into the AfterTriggerEvents memory context,
which is the one where we keep the trigger events. There's only one
place that needs to do the copy, but the memory context may not exist
yet. Doing that in a separate function seems more readable.
Report by Alexander Pyhalov, fix by me. Backpatch to 13, where the
bitmap was added to the event by commit 71d60e2aa0.
Reported-by: Alexander Pyhalov
Backpatch-through: 13
Discussion: https://postgr.es/m/acddb17c89b0d6cb940eaeda18c08bbe@postgrespro.ru
The Incremental Sort code had a couple of issues resulting in memory
leaks during rescans, possibly triggering OOM. There were two related
flaws:
1. During rescans, the sort states were reset but then also set to NULL
(despite the comment saying otherwise). ExecIncrementalSort then
sees NULL and initializes a new sort state, leaking the memory used
by the old one.
2. Initializing the sort state also automatically rebuilt the info about
presorted keys, leaking the already initialized info. presorted_keys
was also unnecessarily reset to NULL.
Patch by James Coleman, based on patches by Laurenz Albe and Tom Lane.
Backpatch to 13, where Incremental Sort was introduced.
Author: James Coleman, Laurenz Albe, Tom Lane
Reported-by: Laurenz Albe, Zu-Ming Jiang
Backpatch-through: 13
Discussion: https://postgr.es/m/b2bd02dff61af15e3526293e2771f874cf2a3be7.camel%40cybertec.at
Discussion: https://postgr.es/m/db03c582-086d-e7cd-d4a1-3bc722f81765%40inf.ethz.ch
The logic that introduced partitioned indexes missed a few things when
invalidating a partitioned index at creation time, even though the code
is written to handle recursion:
1) If an index is created from scratch because no matching index could
be found, the newly created index could itself be invalid, for example
if it is a partitioned index with one of its leaves invalid.
2) A CCI was missing when indisvalid is set for a parent index, leading
to inconsistent trees when recursing across more than one level during
partitioned index creation if an invalidation of the parent was
required.
This could lead to the creation of a partition index tree where some of
the partitioned indexes are marked as invalid, but some of the parents
are marked valid, which is not something that should happen (as
validatePartitionedIndex() defines, indisvalid is switched to true for a
partitioned index iff all its partitions are themselves valid).
This patch makes sure that indisvalid is set to false on a partitioned
index if at least one of its partitions is invalid. The flag is set to
true only if *all* its partitions are valid.
The regression test added in this commit relies on a failed concurrent
index creation, left invalid, which matches an index created afterwards
on its partitioned table.
Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
ALTER TABLE .. SET ACCESS METHOD was not registering a dependency on the
new access method for the relation altered in its rewrite phase, making
it possible to drop an access method even though relations still depend
on it. During the rewrite, a temporary relation is created to build the
new relation files before swapping the new and old files, and, while the
temporary relation registered a correct dependency on the new AM, the
original relation did not. A dependency on the access method is now
added when the relation files are swapped, which is the point where
pg_class is updated.
Materialized views and tables use the same code path, hence both were
impacted.
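For illustration, a sketch of the kind of sequence involved (the access
method name "myam" is hypothetical; heap_tableam_handler is reused only
to create a second table AM):

    -- Hypothetical names, for illustration only.
    CREATE ACCESS METHOD myam TYPE TABLE HANDLER heap_tableam_handler;
    CREATE TABLE t (a int);
    ALTER TABLE t SET ACCESS METHOD myam;
    -- Before the fix, no dependency on myam was recorded for t by the
    -- rewrite, so the following DROP could succeed, leaving t pointing
    -- at a dropped access method:
    DROP ACCESS METHOD myam;
    -- After the fix, the DROP is rejected because t depends on myam.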
Backpatch down to 15, where this command was introduced.
Reported-by: Alexander Lakhin
Reviewed-by: Nathan Bossart, Andres Freund
Discussion: https://postgr.es/m/18000-9145c25b1af475ca@postgresql.org
Backpatch-through: 15
If the given composite datum is toasted out-of-line,
DatumGetHeapTupleHeader will perform database accesses to detoast it.
That can invalidate the result of get_cached_rowtype, as documented
(perhaps not plainly enough) in that function's API spec; which leads
to strange errors or crashes when we try to use the TupleDesc to read
the tuple. In short then, trying to update a field of a composite
column could fail intermittently if the overall column value is wide
enough to require toasting.
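For illustration, a reproduction sketch (names are hypothetical; the
failure is intermittent unless debug_discard_caches is enabled):

    -- Hypothetical names, for illustration only.
    CREATE TYPE wide_rec AS (a text, b text);
    CREATE TABLE ct (id int, r wide_rec);
    -- Build a composite value wide enough to be toasted out-of-line.
    INSERT INTO ct
      SELECT 1, ROW(string_agg(md5(g::text), ''), 'y')::wide_rec
      FROM generate_series(1, 5000) g;
    -- Updating one field of the composite column detoasts the old value,
    -- which could invalidate the cached row type before it was used.
    UPDATE ct SET r.b = 'z' WHERE id = 1;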
We can fix the bug at no cost by just changing the order of
operations, since we don't need the TupleDesc until after detoasting.
(Other callers of get_cached_rowtype appear to get this right already,
so there's only one bug.)
Note that the added regression test case reveals this bug reliably
only with debug_discard_caches/CLOBBER_CACHE_ALWAYS.
Per bug #17994 from Alexander Lakhin. Sadly, this patch does not fix
the missing-values issue revealed in the bug discussion; we'll need
some more work to cover that.
Discussion: https://postgr.es/m/17994-5c7100b51b4790e9@postgresql.org
It was walking into the ColumnDef->compression field, which is not a
node but a string. This code is currently not reachable (because the
compression field is only set in situations that don't go through
raw_expression_tree_walker()), but if it had been, this could have
behaved erratically.
A portion of ALTER TABLE .. ATTACH PARTITION is to ensure that the
partition being attached to the partitioned table has a correct set of
indexes, so that there is a consistent index mapping between the
partitioned table and its soon-to-be partition. However, as introduced
in 8b08f7d, the logic could choose an invalid index as a match; such
indexes can exist when dealing with more than two levels of
partitioning, for example when attaching a partitioned table (one that
has partitions of its own, with an index created by CREATE INDEX ON
ONLY) to another partitioned table.
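For illustration, a sketch of the kind of setup involved (all names are
hypothetical):

    -- Hypothetical names, for illustration only.
    CREATE TABLE parent (a int) PARTITION BY LIST (a);
    CREATE TABLE child (a int) PARTITION BY LIST (a);
    CREATE TABLE child_leaf PARTITION OF child FOR VALUES IN (1);
    -- Index created only on the partitioned table "child"; with no index
    -- attached for child_leaf, child_idx remains invalid.
    CREATE INDEX child_idx ON ONLY child (a);
    CREATE INDEX parent_idx ON parent (a);
    -- Before the fix, attaching child could match parent_idx with the
    -- invalid child_idx, leaving a valid parent index over an invalid
    -- partition index.
    ALTER TABLE parent ATTACH PARTITION child FOR VALUES IN (1);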
A partitioned index with indisvalid set to false is equivalent to an
incomplete partition tree, meaning that an invalid partitioned index
does not have indexes defined in all its partitions. Hence, choosing an
invalid partitioned index can create inconsistent partition index trees,
where the index on the parent being attached to is valid, but the index
on its partition may be invalid.
In the report from Alexander Lakhin, this showed up as an assertion
failure when validating an index. Without assertions enabled, the
partition index tree would actually be broken, as indisvalid should
be switched to true for a partitioned index once all its partitions are
themselves valid. With two levels of partitioning, the top partitioned
table used a valid index and was able to link to an invalid index stored
on its partition, itself a partitioned table.
I have studied a few options here (like the possibility of switching
indisvalid to false for the parent), but came to the conclusion that
we'd better rely on a simple rule: invalid indexes had better never be
chosen, so that the attached partition uses and creates the indexes
that the parent expects. Some regression tests are added to provide
some coverage. Note that the existing coverage is not impacted.
This has been a problem since partitioned indexes were introduced, so
backpatch all the way down to v11.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
TParserGet() recurses for some token types, meaning it's possible
to drive it to stack overflow. Since this is a minority behavior,
I chose to add the check_stack_depth() call to the two places that
recurse rather than doing it during every single call.
While at it, add CHECK_FOR_INTERRUPTS(), because this can run
unpleasantly long for long inputs.
Per bug #17995 from Zuming Jiang. This is old, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/17995-9f20ff3e6389db4c@postgresql.org
One of the tests for pipeline mode with portal description expects a
non-NULL PQgetResult, but it used an incorrect error message on failure,
stating that a NULL PQgetResult was the expected outcome.
Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQTkShHecFF+EZrm94Lbsu2ej569T=bz+PjMbw9Aiioxuw@mail.gmail.com
Backpatch-through: 14
Avoid "right sibling %u of block %u is not next child" errors when
vacuuming a corrupt nbtree index. Just LOG the issue and press on.
That way VACUUM will have a decent chance of finishing off all required
processing for the index (and for the table as a whole).
This is similar to recent work from commit 5abff197, as well as work
from commit 5b861baa (later backpatched as commit 43e409ce), which
taught nbtree VACUUM to keep going when its "re-find" check fails. The
hardening added by this commit takes place directly after the "re-find"
check, right before the critical section for the first stage of page
deletion.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=dayg0vjs4+er84TS9ami=csdzjpuiCGbEw=idhwqhzQ@mail.gmail.com
Backpatch: 11- (all supported versions).
exec_parse_message() wants to create a cached plan in all cases,
including for empty input. The empty-input path does not have
a test for being in an aborted transaction, making it possible
that plancache.c will fail due to trying to do database lookups
even though there's no real work to do.
One solution would be to throw an aborted-transaction error in
this path too, but it's not entirely clear whether the lack of
such an error was intentional or whether some clients might be
relying on non-error behavior. Instead, let's hack plancache.c
so that it treats empty statements with the same logic it
already had for transaction control commands, ensuring that it
can soldier through even in an already-aborted transaction.
Per bug #17983 from Alexander Lakhin. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/17983-da4569fcb878672e@postgresql.org
This partially reverts 68cb5af: relying on archiving to enforce the
renaming of the last partial segment of the old timeline with a .partial
suffix at promotion impacted the tests when doing switchovers. As shown
by the logs gathered by the CI for the failed tests, a new standby may
fail to find the WAL segment it needs to follow a promoted instance
across its timeline jump, as the segment got renamed to .partial.
This problem would manifest as a run timeout in 009_twophase.pl, with
the new standby repeatedly requesting from the promoted primary a
segment that it would not find.
Reported-by: Nathan Bossart
Discussion: https://postgr.es/m/20230621043345.GA787473@nathanxps13
Backpatch-through: 13
The existing errhint message and docs were missing the fact that we can't
disassociate from the slot unless the subscription is disabled.
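For illustration, a sketch of the sequence that the updated hint and
docs describe (the subscription name is hypothetical):

    -- Hypothetical subscription name, for illustration only.
    ALTER SUBSCRIPTION mysub SET (slot_name = NONE);  -- fails while enabled
    ALTER SUBSCRIPTION mysub DISABLE;
    ALTER SUBSCRIPTION mysub SET (slot_name = NONE);  -- now allowed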
Author: Robert Sjöblom, Peter Smith
Reviewed-by: Peter Eisentraut, Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/807bdf85-61ea-88e2-5712-6d9fcd4eabff@fortnox.se
If the inner-side expressions contain PARAM_EXEC Params, we must
re-hash whenever the values of those Params change. The executor
mechanism for that exists already, but we failed to invoke it because
finalize_plan() neglected to search the Hash.hashkeys field for
Params. This allowed a previous scan's hash table to be re-used
when it should not be, leading to rows missing from the join's output.
(I believe incorrectly-included join rows are impossible however,
since checking the real hashclauses would reject false matches.)
This bug is very ancient, dating probably to d24d75ff1 of 7.4.
Sadly, this simple fix depends on the plan representational changes
made by 2abd7ae9b, so it will only work back to v12. I thought
about trying to make some kind of hack for v11, but I'm leery
of putting code significantly different from what is used in the
newer branches into a nearly-EOL branch. Seeing that the bug
escaped detection for a full twenty years, problematic cases
must be rare; so I don't feel too awful about leaving v11 as-is.
Per bug #17985 from Zuming Jiang. Back-patch to v12.
Discussion: https://postgr.es/m/17985-748b66607acd432e@postgresql.org
This is a follow-up to f663b00, which has been committed to v13 and
v14, tweaking the TAP test for two-phase transactions so that it
provides coverage for the bug that has been fixed. This change is done
in its own commit for clarity, as v15 and HEAD did not show the
problematic behavior, yet still lacked coverage for it.
While on it, this adds a comment about the dependency between the last
partial segment rename and RecoverPreparedTransactions() at the end of
recovery, as that can be easy to miss.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
Backpatch-through: 13
Here we adjust relation_has_unique_index_for() so that it no longer
makes use of partial unique indexes as uniqueness proofs. It is
incorrect to use these because the predicates used by
check_index_predicates() to set predOK are proven using not only
baserestrictinfo quals but also quals from join conditions. For
relation_has_unique_index_for()'s case, we need to know that the
relation is unique for a given set of columns before any joins are
evaluated, so if predOK was set to true only because of some join qual,
then it's unsafe to use such indexes in
relation_has_unique_index_for(). The final plan may not even make use
of that index, which could result in reading tuples that are not as
unique as the planner previously expected them to be.
Bug: #17975
Reported-by: Tor Erik Linnerud
Backpatch-through: 11, all supported versions
Discussion: https://postgr.es/m/17975-98a90c156f25c952%40postgresql.org
rewriteRuleAction neglected to check for SubLink nodes in the
securityQuals of range table entries. This could lead to failing
to convert such a SubLink to a SubPlan, resulting in assertion
crashes or weird errors later in planning.
In passing, fix some poor coding in rewriteTargetView:
we should not pass the source parsetree's hasSubLinks
field to ReplaceVarsFromTargetList's outer_hasSubLinks.
ReplaceVarsFromTargetList knows enough to ignore that
when a Query node is passed, but it's still confusing
and bad precedent: if we did try to update that flag
we'd be updating a stale copy of the parsetree.
Per bug #17972 from Alexander Lakhin. This has been broken since
we added RangeTblEntry.securityQuals (although the presented test
case only fails back to 215b43cdc), so back-patch all the way.
Discussion: https://postgr.es/m/17972-f422c094237847d0@postgresql.org
Commit 927d9abb6 purported to make datetime() accept any string
that could be output for a datetime value by to_jsonb(). But it
overlooked the possibility of fractional seconds being present,
so that cases as simple as to_jsonb(now()) would defeat it.
Fix by adding formats that include ".US" to the list in
executeDateTimeMethod(). (Note that while this is nominally
microseconds, it'll do the right thing for fractions with
fewer than six digits.)
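For illustration, the simplest failing case looked like this on an
unpatched server (the exact error text may vary):

    SELECT jsonb_path_query(to_jsonb(now()), '$.datetime()');
    -- Failed before the fix because to_jsonb(now()) emits fractional
    -- seconds; with the ".US" formats added, the same call succeeds.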
In passing, re-order the list to restore the datatype ordering
specified in its comment. The violation accidentally did not
break anything; but the next edit might be less lucky, so add
more comments.
Per report from Tim Field. Back-patch to v13 where datetime()
was added, like the previous patch.
Discussion: https://postgr.es/m/014A028B-5CE6-4FDF-AC24-426CA6FC9CEE@mohiohio.com
The same routine to check whether a specific pattern can be found in
the server logs was copied over four different test scripts. This
refactors the whole thing to use a single routine located in
PostgreSQL::Test::Cluster, named log_contains, to grab the contents of
the server logs and check for a specific pattern.
On HEAD, the code previously used assumed that slurp_file() could not
handle an undefined offset and set it to zero, but slurp_file() only
does an extra fseek() before retrieving the log contents if an offset
is defined. In two places, the test retrieved the full log contents
with slurp_file() and then applied an offset with substr(), ignoring
that slurp_file() is able to handle that itself.
Backpatch all the way down to ease the introduction of new tests that
could rely on the new routine.
Author: Vignesh C
Reviewed-by: Andrew Dunstan, Dagfinn Ilmari Mannsåker, Michael Paquier
Discussion: https://postgr.es/m/CALDaNm0YSiLpjCmajwLfidQrFOrLNKPQir7s__PeVvh9U3uoTQ@mail.gmail.com
Backpatch-through: 11
This commit slightly refactors the code in charge of checking for log
patterns when connections fail or succeed, moving the log pattern
checks into their own routine for clarity. This has come up as
something to improve while discussing the refactoring of find_in_log().
Backpatch down to 14 where these routines are used, to ease the
introduction of new tests that could rely on them.
Author: Vignesh C, Michael Paquier
Discussion: https://postgr.es/m/CALDaNm0YSiLpjCmajwLfidQrFOrLNKPQir7s__PeVvh9U3uoTQ@mail.gmail.com
Backpatch-through: 14
Commit fc22b6623b (generated columns) replaced ExecGetUpdatedCols() with
ExecGetAllUpdatedCols() in a couple places handling UPDATE (triggers and
lock mode). However, while ExecGetUpdatedCols() only did
exec_rt_fetch(), ExecGetAllUpdatedCols() also allocates memory through
bms_union() without paying attention to the memory context; it happened
to use the long-lived ExecutorState, leaking the memory until the end
of the query.
The amount of leaked memory is proportional to the number of (updated)
attributes, types of UPDATE triggers, and the number of processed rows
(which for UPDATE ... FROM ... may be much higher than updated rows).
Fixed by switching to the per-tuple context in GetAllUpdatedColumns().
This is fine for all in-core callers, but external callers may need to
copy the result; we're not aware of any such callers, though.
Note the issue was introduced by fc22b6623b, but the macros were later
renamed by f50e888990.
Backpatch to 12, where the issue was introduced.
Reported-by: Tomas Vondra
Reviewed-by: Andres Freund, Tom Lane, Jakub Wartak
Backpatch-through: 12
Discussion: https://postgr.es/m/222a3442-7f7d-246c-ed9b-a76209d19239@enterprisedb.com
A new-style SQL function can contain a parse-time dependency
on a unique index, much as views and matviews can (such cases
arise from GROUP BY and ON CONFLICT clauses, for example).
To dump and restore such a function successfully, pg_dump must
postpone the function until after the unique index is created,
which will happen in the post-data part of the dump. Therefore
we have to remove the normal constraint that functions are
dumped in pre-data. Add code similar to the existing logic
that handles this for matviews. I added test cases for both
as well, since code coverage tests showed that we weren't
testing the matview logic.
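For illustration, a sketch of the kind of new-style SQL function
involved (names are hypothetical):

    -- Hypothetical names, for illustration only.
    CREATE TABLE t (id int);
    CREATE UNIQUE INDEX t_id_uniq ON t (id);
    CREATE FUNCTION upsert_t(i int) RETURNS void
    LANGUAGE SQL
    BEGIN ATOMIC
      INSERT INTO t VALUES (i) ON CONFLICT (id) DO NOTHING;
    END;
    -- The ON CONFLICT clause gives upsert_t() a parse-time dependency on
    -- t_id_uniq, so pg_dump must now emit the function in post-data,
    -- after the unique index.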
Per report from Sami Imseih. Back-patch to v14 where
new-style SQL functions came in.
Discussion: https://postgr.es/m/2C1933AB-C2F8-499B-9D18-4AC1882256A0@amazon.com
Two places in pg_dump_sort.c were using pg_log_info() to add
more details to a message printed with pg_log_warning().
This is bad, because at default verbosity level we would
print the warning line but not the details. One should use
pg_log_warning_detail() or pg_log_warning_hint() instead.
Commit 9a374b77f got rid of most such abuses, but unaccountably
missed these.
Noted while studying a bug report from Sami Imseih.
Back-patch to v15 where 9a374b77f came in. (Prior versions
don't have the missing-details misbehavior, for reasons
I didn't bother to track down.)
Discussion: https://postgr.es/m/2C1933AB-C2F8-499B-9D18-4AC1882256A0@amazon.com
Avoid "right sibling's left-link doesn't match" errors when vacuuming a
corrupt nbtree index. Just LOG the issue and press on. That way VACUUM
will have a decent chance of finishing off all required processing for
the index (and for the table as a whole).
This error was seen in the field from time to time (it's more than a
theoretical risk), so giving VACUUM the ability to press on like this
has real value. Nothing short of a REINDEX is expected to fix the
underlying index corruption, so giving up (by throwing an error) risks
making a bad situation far worse. Anything that blocks forward progress
by VACUUM like this might go unnoticed for a long time. This could
eventually lead to a wraparound/xidStopLimit outage.
Note that _bt_unlink_halfdead_page() has always been able to bail on
page deletion when the target page's left sibling page was in an
inconsistent state. It now does the same thing (returns false to back
out of the second phase of deletion) when it notices sibling link
corruption in the target page's right sibling page.
This is similar to the work from commit 5b861baa (later backpatched as
commit 43e409ce), which taught nbtree to press on with vacuuming an
index when page deletion fails to "re-find" a downlink in the target
page's parent page. The "re-find" check seems to make VACUUM bail on
page deletion more often in practice, but there is no reason to take any
chances here.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wzko2q2kP1+UvgJyP9g0mF4hopK0NtQZcxwvMv9_ytGhkQ@mail.gmail.com
Backpatch: 11- (all supported versions).
The idea of EvalPlanQual is that we replace the query's scan of the
result relation with a single injected tuple, and see if we get a
tuple out, thereby implying that the injected tuple still passes the
query quals. (In join cases, other relations in the query are still
scanned normally.) This logic was not updated when commit 86dc90056
made it possible for a single DML query plan to have multiple result
relations, when the query target relation has inheritance or partition
children. We replaced the output for the current result relation
successfully, but other result relations were still scanned normally;
thus, if any other result relation contained a tuple satisfying the
quals, we'd think the EPQ check passed, even if it did not pass for
the injected tuple itself. This would lead to update or delete
actions getting performed when they should have been skipped due to
a conflicting concurrent update in READ COMMITTED isolation mode.
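A sketch of the kind of scenario involved, with two sessions under READ
COMMITTED (names are hypothetical, and the outcome depends on the plan):

    -- Hypothetical names, for illustration only.
    CREATE TABLE p (id int, val int) PARTITION BY LIST (id);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    CREATE TABLE p2 PARTITION OF p FOR VALUES IN (2);
    INSERT INTO p VALUES (1, 10), (2, 10);
    -- Session 1:
    BEGIN;
    UPDATE p SET val = 20 WHERE id = 1;   -- updates the row in p1
    -- Session 2 (blocks on session 1's row):
    UPDATE p SET val = 30 WHERE val = 10;
    -- Session 1:
    COMMIT;
    -- Before the fix, session 2's EPQ recheck of the p1 row could wrongly
    -- pass because sibling result relation p2 still returned a tuple
    -- satisfying "val = 10", so the p1 row could be updated even though
    -- its new version no longer passes the qual.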
Fix by blocking all sibling result relations from emitting tuples
during an EvalPlanQual recheck. In the back branches, the fix is
complicated a bit by the need to not change the size of struct
EPQState (else we'd have ABI-breaking changes in offsets in
struct ModifyTableState). Like the back-patches of 3f7836ff6
and 4b3e37993, add a separately palloc'd struct to avoid that.
The logic is the same as in HEAD otherwise.
This is only a live bug back to v14 where 86dc90056 came in.
However, I chose to back-patch the test cases further, on the
grounds that this whole area is none too well tested. I skipped
doing so in v11 though because none of the test applied cleanly,
and it didn't quite seem worth extra work for a branch with only
six months to live.
Per report from Ante Krešić (via Aleksander Alekseev)
Discussion: https://postgr.es/m/CAJ7c6TMBTN3rcz4=AjYhLPD_w3FFT0Wq_C15jxCDn8U4tZnH1g@mail.gmail.com
Commits 681d9e462 et al added a test case in namespace.sql that
implicitly relied on there not being a table "public.abc".
However, the concurrently-run transactions.sql test creates precisely
such a table, so with the right timing you'd get a failure.
Creating a table named as generically as "abc" in a common schema
seems like bad practice, so fix this by changing the name of
transactions.sql's table. (Compare 2cf8c7aa4.)
Marina Polyakova
Discussion: https://postgr.es/m/80d0201636665d82185942e7112257b4@postgrespro.ru
Non-leaf pages of GiST indexes contain only key attributes, while leaf
pages contain both key and non-key attributes, and gist_page_items()
did not handle non-key attributes at all. This caused a few problems
when using gist_page_items() on a GiST index with INCLUDE:
- On a non-leaf page, the function would crash.
- On a leaf page, the function would work, but fail to display the
values of the included attributes.
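For illustration, a minimal sketch that exercises both page types
(names are hypothetical):

    -- Hypothetical names, for illustration only.
    CREATE EXTENSION IF NOT EXISTS pageinspect;
    CREATE TABLE pts (p point, extra int);
    INSERT INTO pts SELECT point(g, g), g FROM generate_series(1, 10000) g;
    CREATE INDEX pts_gist ON pts USING gist (p) INCLUDE (extra);
    -- Before the fix, this could crash on a non-leaf page and omitted the
    -- included attribute on leaf pages.
    SELECT * FROM gist_page_items(get_raw_page('pts_gist', 0), 'pts_gist');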
This commit fixes gist_page_items() to handle such cases more
appropriately, and it now displays the values of key and non-key
attributes for each item separately, in a style consistent with what
ruleutils.c would generate for the attribute list, depending on the
page type dealt with. In a way similar to how a record is displayed,
values are double-quoted for key or non-key attributes if required.
ruleutils.c did not provide a routine able to control whether non-key
attributes should be displayed, so an extended() routine for index
definitions is added to work around the leaf and non-leaf page
differences.
While on it, this commit fixes a third problem related to the amount of
data reported for key attributes. The code originally relied on
BuildIndexValueDescription() (used for error reports on constraints),
which prints not the full data stored in the index but only the index
opclass's input type, limiting the amount of information available.
The switch also makes gist_page_items() much cheaper, as there is no
need to run ACL checks for each item printed; that is not an issue
anyway, since superuser rights are required to execute pageinspect's
functions. Opclasses whose data cannot be displayed can rely on
gist_page_items_bytea().
The documentation of this function was slightly incorrect for the
output results generated on HEAD and v15, so adjust it on these
branches.
Author: Alexander Lakhin, Michael Paquier
Discussion: https://postgr.es/m/17884-cb8c326522977acb@postgresql.org
Backpatch-through: 14
BRIN indexes did not properly distinguish between summaries for empty
(no rows) and all-NULL ranges, treating them as essentially the same
thing. Summaries were initialized with allnulls=true, and opclasses
simply reset allnulls to false when processing the first non-NULL value.
This however produces incorrect results if the range starts with a NULL
value (or a sequence of NULL values), in which case we forget the range
contains NULL values when adding the first non-NULL value.
This happens because the allnulls flag is used for two separate
purposes - to mark empty ranges (not representing any rows yet) and
ranges containing only NULL values.
Opclasses don't know which of these cases it is, and so don't know
whether to set hasnulls=true. Setting the flag in both cases would make
it correct, but it would also make BRIN indexes useless for queries with
IS NULL clauses. All ranges start empty (and thus allnulls=true), so all
ranges would end up with either allnulls=true or hasnulls=true.
The severity of the issue is somewhat reduced by the fact that it only
happens when adding values to an existing summary with allnulls=true.
This can happen e.g. for small tables (because a summary for the first
range exists for all BRIN indexes), or for tables with large fraction of
NULL values in the indexed columns.
Bulk summarization (e.g. during CREATE INDEX or automatic summarization)
that processes all values at once is not affected by this issue. In this
case the flags were updated in a slightly different way, not forgetting
the NULL values.
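For illustration, a reproduction sketch (names are hypothetical; the
query may need enable_seqscan = off to use the BRIN index):

    -- Hypothetical names, for illustration only.
    CREATE TABLE brin_nulls (a int);
    INSERT INTO brin_nulls VALUES (NULL);    -- range starts with a NULL
    CREATE INDEX brin_nulls_idx ON brin_nulls USING brin (a);
    -- Add non-NULL values to the existing allnulls=true summary.
    INSERT INTO brin_nulls SELECT g FROM generate_series(1, 1000) g;
    SET enable_seqscan = off;
    SELECT count(*) FROM brin_nulls WHERE a IS NULL;
    -- Expected 1; before the fix, this could return 0.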
To identify empty ranges we use a new flag, stored in an unused bit in
the BRIN tuple header so the on-disk format remains the same. A matching
flag is added to BrinMemTuple, into a 3B gap after bt_placeholder.
That means there's no risk of ABI breakage, although we don't actually
pass the BrinMemTuple to any public API.
We could also skip storing index tuples for empty summaries, but then
we'd have to always process such ranges - even if there are no rows in
large parts of the table (e.g. after a bulk DELETE), it would still
require reading the pages etc. So we store them, but ignore them when
building the bitmap.
Backpatch to 11. The issue exists since BRIN indexes were introduced in
9.5, but older releases are already EOL.
Backpatch-through: 11
Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com
When merging BRIN summaries, union_tuples() did not correctly update the
target hasnulls/allnulls flags. When merging all-NULL summary into a
summary without any NULL values, the result had both flags set to false
(instead of having hasnulls=true).
This happened because the code only considered the hasnulls flags,
ignoring the possibility the source summary has allnulls=true.
Discovered while investigating issues with handling empty BRIN ranges
and handling of NULL values, but it's a separate problem (has nothing to
do with empty ranges).
Fixed by considering both flags on the source summary, and updating the
hasnulls flag on the target summary.
Backpatch to 11. The bug exists since 9.5 (where BRIN indexes were
introduced), but those releases are EOL already.
Discussion: https://postgr.es/m/9d993d0d-e431-2196-9ccc-0554d0e60154%40enterprisedb.com
The problem that these messages protect against can only occur because
a corrupted hash spill file was written, i.e., a Postgres bug. There's
no reason for them to be translatable.
Backpatch to 15, where these messages were changed by commit c4649cce39a4.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/20230510175407.dwa5v477pw62ikyx@alvherre.pgsql
Don't use PSQL_WATCH_PAGER when stdin/stdout are not a terminal.
This corresponds to the restrictions on when other commands will
use [PSQL_]PAGER. There isn't a lot of sense in trying to use a
pager in non-interactive cases, and doing so allows an environment
setting to break our tests.
Also, ignore PSQL_WATCH_PAGER if it is set but empty or all-blank,
for the same reasons we ignore such settings of [PSQL_]PAGER (see
commit 18f8f784c).
No documentation change is really needed, since there is nothing
suggesting that these constraints on [PSQL_]PAGER didn't already
apply to PSQL_WATCH_PAGER too. But I rearranged the text
a little to make it read more naturally (IMHO anyway).
Per report from Pavel Stehule. Back-patch to v15 where
PSQL_WATCH_PAGER was introduced.
Discussion: https://postgr.es/m/CAFj8pRDTwFzmEWdA-gdAcUh0ZnxUioSfTMre71WyB_wNJy-8gw@mail.gmail.com
There was some odd wording in corner-case gram.y error messages "some
error ... at or near", which appears to have been modeled after "syntax
error" messages. However, they don't work that way, and they're just
wrong. They're also uncovered by tests. Remove the trailing words,
and also add tests.
They were introduced with 5a2832465fd8; backpatch to 15.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
An update of the GUC stats_fetch_consistency within a transaction could
trigger an assertion when switching from "cache" to "snapshot". In this
case, when retrieving a pgstat entry after the switch, a new snapshot
would be rebuilt, confusing pgstat_build_snapshot(), because a snapshot
is already cached with an unexpected mode ("cache").
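For illustration, a sketch of the kind of sequence involved (it assumes
an assert-enabled build and is not a deterministic reproducer):

    BEGIN;
    SET stats_fetch_consistency = 'cache';
    SELECT n_tup_ins FROM pg_stat_user_tables LIMIT 1;  -- caches an entry
    SET stats_fetch_consistency = 'snapshot';
    SELECT n_tup_ins FROM pg_stat_user_tables LIMIT 1;  -- rebuilds a snapshot
    COMMIT;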
In order to fix this problem, this commit adds a flag to force a
snapshot clear each time this GUC is changed. Some tests are added to
check this behavior, while on it.
Some optimizations to avoid the snapshot clear may be possible
depending on what is cached and the current GUC value, but this
solution is simple, and it ensures that the state of the cache is
updated each time a new pgstat entry is fetched, staying consistent
with the level wanted by the client that set the GUC.
Note that cache->none and snapshot->none would not cause issues, as the
pgstat entry would be retrieved from shared memory on the second
attempt; however, a snapshot would still be cached. Similarly,
none->snapshot and none->cache would build a new snapshot on the second
fetch attempt. Finally, snapshot->cache would cache a new snapshot on
the second attempt.
Reported-by: Alexander Lakhin
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/17804-2a118cd046f2d0e5@postgresql.org
backpatch-through: 15
If an SRF in the FROM clause references a table having row-level
security policies, and we inline that SRF into the calling query,
we neglected to mark the plan as potentially dependent on which
role is executing it. This could lead to later executions in the
same session returning or hiding rows that should have been hidden
or returned instead.
Our thanks to Wolfgang Walther for reporting this problem.
Stephen Frost and Tom Lane
Security: CVE-2023-2455
The two methods don't cooperate, so set_config_option("search_path",
...) has been ineffective under non-empty overrideStack. This defect
enabled an attacker having database-level CREATE privilege to execute
arbitrary code as the bootstrap superuser. While that particular attack
requires v13+ for the trusted extension attribute, other attacks are
feasible in all supported versions.
Standardize on the combination of NewGUCNestLevel() and
set_config_option("search_path", ...). It is newer than
PushOverrideSearchPath(), more-prevalent, and has no known
disadvantages. The "override" mechanism remains for now, for
compatibility with out-of-tree code. Users should update such code,
which likely suffers from the same sort of vulnerability closed here.
Back-patch to v11 (all supported versions).
Alexander Lakhin. Reported by Alexander Lakhin.
Security: CVE-2023-2454
This wait event has been documented as "CommitTsBuffer" since its
introduction, but the code named it "CommitTSBuffer". This commit fixes
the code to follow the documented name, which is also more consistent
with the naming of the other wait events used for commit timestamps.
Introduced by 5da1493.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/e8c38840-596a-83d6-bd8d-cebc51111572@gmail.com
Backpatch-through: 13
Commit 153e215677 added the portlock directory. This is created in
$ENV{top_builddir} if it is set. Under PGXS, top_builddir points into
the installation directory, which is not necessarily writable and in
any case inappropriate to use by a test suite. The cause of the
problem is that the prove_installcheck target in Makefile.global
exports top_builddir, which isn't useful (since no other Perl code
actually reads it) and breaks this use case. The reason this code is
there is probably that is has been dragged around with various other
changes, in particular a0fc813266, but without a real purpose of its
own. By just removing the exporting of top_builddir in
prove_installcheck, the portlock directory then ends up under
tmp_check in the build directory, which is more suitable.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://www.postgresql.org/message-id/78d1cfa6-0065-865d-584b-cde6d8c18aff@enterprisedb.com
If we exit a PG_TRY block early via "continue", "break", "goto", or
"return", we'll skip unwinding its exception stack. This change
moves a couple of such "return" statements in PL/Python out of
PG_TRY blocks. This was introduced in d0aa965c0a and affects all
supported versions.
We might also be able to add compile-time checks to prevent
recurrence, but that is left as a future exercise.
Reported-by: Mikhail Gribkov, Xing Guo
Author: Xing Guo
Reviewed-by: Michael Paquier, Andres Freund, Tom Lane
Discussion: https://postgr.es/m/CAMEv5_v5Y%2B-D%3DCO1%2Bqoe16sAmgC4sbbQjz%2BUtcHmB6zcgS%2B5Ew%40mail.gmail.com
Discussion: https://postgr.es/m/CACpMh%2BCMsGMRKFzFMm3bYTzQmMU5nfEEoEDU2apJcc4hid36AQ%40mail.gmail.com
Backpatch-through: 11 (all supported versions)
These functions incautiously fetched the array's first lower bound
even when the array is zero-dimensional, thus fetching the word
after the allocated array space. While almost always harmless,
with very bad luck this could result in SIGSEGV. Fix by adding
an early exit for empty input.
Per bug #17920 from Alexander Lakhin.
Discussion: https://postgr.es/m/17920-f7c228c627b6d02e%40postgresql.org