For each Index AM, provide a mapping between operator strategies and
the system-wide generic concept of a comparison type. For example,
for btree, BTLessStrategyNumber maps to and from COMPARE_LT. Numerous
places in the planner and executor think directly in terms of btree
strategy numbers (and a few in terms of hash strategy numbers). These
should be converted over subsequent commits to think in terms of
CompareType instead. (This commit doesn't make any use of this API
yet.)
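As a rough, self-contained sketch of the idea (the enum definition and
function below are illustrative stand-ins, not the committed API; the
strategy-number values mirror access/stratnum.h), the btree side of such
a mapping amounts to a switch over the five strategy numbers:

    #include <stdio.h>

    /* btree strategy numbers, as defined in access/stratnum.h */
    #define BTLessStrategyNumber         1
    #define BTLessEqualStrategyNumber    2
    #define BTEqualStrategyNumber        3
    #define BTGreaterEqualStrategyNumber 4
    #define BTGreaterStrategyNumber      5

    /* stand-in for the generic comparison-type enum */
    typedef enum CompareType
    {
        COMPARE_INVALID = 0,
        COMPARE_LT,
        COMPARE_LE,
        COMPARE_EQ,
        COMPARE_GE,
        COMPARE_GT
    } CompareType;

    /* hypothetical btree translation: strategy number -> comparison type */
    static CompareType
    bt_strategy_to_compare_type(int strategy)
    {
        switch (strategy)
        {
            case BTLessStrategyNumber:
                return COMPARE_LT;
            case BTLessEqualStrategyNumber:
                return COMPARE_LE;
            case BTEqualStrategyNumber:
                return COMPARE_EQ;
            case BTGreaterEqualStrategyNumber:
                return COMPARE_GE;
            case BTGreaterStrategyNumber:
                return COMPARE_GT;
            default:
                return COMPARE_INVALID;
        }
    }

    int
    main(void)
    {
        /* BTLessStrategyNumber maps to COMPARE_LT */
        printf("%d\n", bt_strategy_to_compare_type(BTLessStrategyNumber));
        return 0;
    }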
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
Consistently use "Size" (or size_t, or in some places int64 or double)
as the type for variables holding memory allocation sizes. In most
places variables' data types were fine already, but we had an ancient
habit of computing bytes from kilobytes-units GUCs with code like
"work_mem * 1024L". That risks overflow on Win64 where they did not
make "long" as wide as "size_t". We worked around that by restricting
such GUCs' ranges, so you couldn't set work_mem et al higher than 2GB
on Win64. This patch removes that restriction, after replacing such
calculations with "work_mem * (Size) 1024" or variants of that.
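A minimal, self-contained illustration of the hazard and the fix (the
variable below is a stand-in for a kilobytes-unit GUC, not the patched
code itself):

    #include <stddef.h>
    #include <stdio.h>

    int
    main(void)
    {
        int     work_mem_kb = 4 * 1024 * 1024;  /* e.g. 4GB worth of kB */

        /*
         * Risky: on Win64 "long" is only 32 bits, so this multiplication
         * is done in 32-bit arithmetic and can overflow before the result
         * is widened.
         */
        long    risky = work_mem_kb * 1024L;

        /*
         * Safe: widen to size_t (64 bits on Win64) before multiplying,
         * i.e. the "work_mem * (Size) 1024" pattern.
         */
        size_t  safe = work_mem_kb * (size_t) 1024;

        printf("%ld %zu\n", risky, safe);
        return 0;
    }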
It should be noted that this patch was constructed by searching
outwards from the GUCs that have MAX_KILOBYTES as upper limit.
So I can't positively guarantee there are no other places doing
memory-size arithmetic in int or long variables. I do however feel
pretty confident that increasing MAX_KILOBYTES on Win64 is safe now.
Also, nothing in our code should be dealing in multiple-gigabyte
allocations without authorization from a relevant GUC, so it seems
pretty likely that this search caught everything that could be at
risk of overflow.
Author: Vladlen Popolitov <v.popolitov@postgrespro.ru>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
Oversight in commit 1bd4bc85, which made nbtree backwards scans operate
off of a copy of each page's left link as of the time of its call to
_bt_readpage.
Move nbtree's detection of RowCompare quals that are unsatisfiable due
to having a NULL in their first row element: rather than detecting these
cases at the point where _bt_first builds its insertion scan key, do so
earlier, during preprocessing proper. This brings the RowCompare case
in line with every other case involving an unsatisfiable-due-to-NULL qual.
nbtree now consistently detects such unsatisfiable quals -- even when
they happen to involve a key that isn't examined by _bt_first at all.
Affected cases thereby avoid useless full index scans that cannot
possibly return any matching rows.
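As a minimal, self-contained sketch of the underlying rule (the struct
and function below are hypothetical stand-ins, not nbtree's actual data
structures): a row comparison whose first member's argument is NULL can
never evaluate to true, so preprocessing can mark the whole qual as
unsatisfiable up front:

    #include <stdbool.h>
    #include <stdio.h>

    /* stand-in for one member of a row-comparison qual */
    typedef struct RowCompareMemberSketch
    {
        bool    argisnull;  /* is this member's comparison value NULL? */
        /* ... column number, operator, argument datum, etc. ... */
    } RowCompareMemberSketch;

    /*
     * Returns true when the row comparison is unsatisfiable: a NULL in
     * the first row element means no index tuple can ever satisfy it.
     */
    static bool
    rowcompare_unsatisfiable(const RowCompareMemberSketch *members,
                             int nmembers)
    {
        return nmembers > 0 && members[0].argisnull;
    }

    int
    main(void)
    {
        RowCompareMemberSketch members[2] = {{true}, {false}};

        printf("%d\n", rowcompare_unsatisfiable(members, 2)); /* prints 1 */
        return 0;
    }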
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzmySVXst2hFrOATC-zw1Byg1XC-jYUS314=mzuqsNwk+Q@mail.gmail.com
Per git log, the last time someone tried to do something with
pgrminclude was around 2011. Many (not all) of the "pgrminclude
ignore" annotations are of a newer date but seem to have just been
copied around during refactorings and file moves and don't seem to
reflect an actual need anymore.
There have been some parallel experiments with include-what-you-use
(IWYU) annotations, but these don't seem to correspond very strongly
to pgrminclude annotations, so there is no value in keeping the
existing ones even for that kind of thing.
So, wipe them all away. We can always add new ones in the future
based on actual needs.
Discussion: https://www.postgresql.org/message-id/flat/2d4dc7b2-cb2e-49b1-b8ca-ba5f7024f05b%40eisentraut.org
Reset btpo_cycleid to 0 in btree_xlog_vacuum for consistency with
_bt_delitems_vacuum (the corresponding original execution code). This
makes things neater.
There might be some performance benefit to being consistent like this.
When btvacuumpage doesn't call _bt_delitems_vacuum, it can still
proactively reset btpo_cycleid to 0 via a separate hint-like update
mechanism (it does so whenever it sees that it isn't already set to 0).
And so it's possible that being consistent about resetting btpo_cycleid
like this will save work later on, after standby promotion: subsequent
VACUUMs won't need to clear btpo_cycleid using the hint-like update
mechanism as often as they otherwise would.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/CAH2-Wz=+LDFxn9NZyEsCo8ifcyKt6+n-VLyygySEHgMz+oynqw@mail.gmail.com
The new compact_attrs array stores a few select fields from
FormData_pg_attribute in a more compact way, using only 16 bytes per
column instead of the 104 bytes that FormData_pg_attribute uses. Using
CompactAttribute allows performance-critical operations such as tuple
deformation to be performed without looking at the FormData_pg_attribute
element in TupleDesc, which means fewer cacheline accesses.
For some workloads, tuple deformation can be the most CPU intensive part
of processing the query. Some testing with 16 columns on a table
where the first column is variable length showed around a 10% increase in
transactions per second for an OLAP type query performing aggregation on
the 16th column. However, in certain cases, the increases were much
higher, up to ~25% on one AMD Zen4 machine.
This also makes pg_attribute.attcacheoff redundant. A follow-on commit
will remove it, thus shrinking the FormData_pg_attribute struct by 4
bytes.
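A self-contained illustration of the general idea (field names and
layout below are hypothetical stand-ins, not the committed
CompactAttribute definition): pack the handful of fields that tuple
deformation actually needs into 16 bytes:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct CompactAttributeSketch
    {
        int32_t     attcacheoff;    /* cached offset into tuple, or -1 */
        int16_t     attlen;         /* fixed length, or -1/-2 for varlena */
        char        attalignby;     /* required alignment, in bytes */
        bool        attbyval;       /* pass-by-value? */
        bool        attispackable;  /* can the value be stored packed? */
        bool        atthasmissing;  /* has a missing default value? */
        bool        attisdropped;   /* column has been dropped? */
        bool        attnotnull;     /* NOT NULL constraint? */
        char        padding[4];     /* spare space; keeps the size at 16 */
    } CompactAttributeSketch;

    int
    main(void)
    {
        /* the point: 16 bytes rather than the ~104-byte catalog row form */
        printf("%zu\n", sizeof(CompactAttributeSketch));
        return 0;
    }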
Author: David Rowley
Reviewed-by: Andres Freund, Victor Yegorov
Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com
Consistently reset so->scanBehind at the beginning of nbtree array
advancement, even during sktrig_required=false calls (calls where array
advancement is triggered by an unsatisfied non-required array scan key).
Otherwise, it's possible for queries to fail to return all relevant
tuples to the scan given a low-order required scan key that was
previously deemed "satisfied" by a truncated high key attribute value.
This only happened at the point where a later non-required array scan
key needed to be "advanced" once on the next leaf page (that is, once
the right sibling of the truncated high key page was reached).
The underlying issue was that later code within _bt_advance_array_keys
assumed that the so->scanBehind flag must have been set using the
current page's high key (not the previous page's high key). Any later
successful recheck call to _bt_check_compare would therefore spuriously
be prevented from making _bt_advance_array_keys return true, based on
the faulty belief that the truncated attribute must be from the scan's
current tuple (i.e. the non-pivot tuple at the start of the next page).
_bt_advance_array_keys would return false for the tuple, ultimately
resulting in _bt_checkkeys failing to return a matching tuple.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzkJKncfqyAUTeuB5GgRhT1vhsWO2q11dbZNqKmvjopP_g@mail.gmail.com
Backpatch: 17-, where commit 5bf748b8 first appears.
Spell out how a = key associated with a SAOP array renders a > key
against the same index column redundant at the relevant point inside
_bt_preprocess_keys.
Follow-up to commit 5bf748b8.
When _bt_readnextpage is called with our nbtree parallel scan already
seized (i.e. when it is directly called by _bt_first), we never expect a
prior call to _bt_readpage for lastcurrblkno to already indicate that
the scan should end -- the _bt_first caller's blkno must always be read.
After all, the "prior" _bt_readpage call (the call for lastcurrblkno)
probably took place in some other backend (and it might not even have
finished by the time our backend reaches _bt_first/_bt_readnextpage).
Add a documenting assertion to the path where _bt_readnextpage ends the
parallel scan based on information about lastcurrblkno from so->currPos.
Assert that the most recent _bt_readpage call that set so->currPos is in
fact lastcurrblkno's _bt_readpage call.
Follow-up to bugfix commit b5ee4e52.
Commit 1bd4bc85, which refactored nbtree sibling link traversal, made
_bt_parallel_seize reset the scan's currPos so that things were
consistent with the state of a serial backend moving between pages.
This overlooked the fact that _bt_readnextpage relied on the existing
currPos state to decide when to end the scan -- even though it came from
before the scan was seized. As a result of all this, parallel nbtree
scans could needlessly behave like full index scans.
To fix, teach _bt_readnextpage to explicitly allow the use of an already
read page's so->currPos when deciding whether to end the scan -- even
during parallel index scans (allow it consistently now). This requires
moving _bt_readnextpage's seizure of the scan to earlier in its loop.
That way _bt_readnextpage either deals with the true so->currPos state,
or an initialized-by-_bt_parallel_seize currPos state set from when the
scan was seized. Now _bt_steppage (the most important _bt_readnextpage
caller) takes the same uniform approach to setting up its call using
details taken from so->currPos -- regardless of whether the scan happens
to be parallel or serial.
The new loop structure in _bt_readnextpage is prone to getting confused
by P_NONE blknos set when the rightmost or leftmost page was reached.
We could avoid that by adding an explicit check, but that would be ugly.
Avoid this problem by teaching _bt_parallel_seize to end the parallel
scan instead of returning a P_NONE next block/blkno. Doing things this
way was arguably a missed opportunity for commit 1bd4bc85. It allows us
to remove a similar "blkno == P_NONE" check from _bt_first.
Oversight in commit 1bd4bc85, which refactored sibling link traversal
(as part of optimizing nbtree backward scan locking).
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Diagnosed-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Discussion: https://postgr.es/m/f8efb9c0f8d1a71b44fd7f8e42e49c25@oss.nttdata.com
_bt_first doesn't necessarily hold onto a buffer pin on success exit.
Fix header comments that claimed that we'll always hold onto a pin.
Oversight in commit 2ed5b87f96.
Remove a local variable that was used to avoid overwriting strat_total
with the = operator strategy when a >= operator strategy key was already
included in the initial positioning/insertion scan keys by _bt_first
(for backwards scans it would have to be a <= key that was included).
_bt_first's strat_total local variable now simply tracks the operator
strategy of the final scan key that was included in the scan's insertion
scan key (barring the case where the !used_all_subkeys row compare path
adjusts strat_total in its own way).
_bt_first already treated >= keys (or <= keys) as = keys for initial
positioning purposes. There is no good reason to remember that that was
what happened; no later _bt_first step cares about the distinction.
Note, in particular, that the insertion scan key's 'nextkey' and
'backward' fields will be initialized the same way regardless.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=PKR6rB7qbx+Vnd7eqeB5VTcrW=iJvAsTsKbdG+kW_UA@mail.gmail.com
_bt_endpoint is a helper function for _bt_first that's called whenever
no useful insertion scan key can be used, and we need to lock and read
either the leftmost or rightmost leaf page in the index. Simplify and
document its preconditions, relieving its _bt_first caller from having
to end the parallel scan when it returns false.
Also stop unnecessarily invalidating the current scan position in nearby
code in both _bt_first and _bt_endpoint. This seems to have been
copy-pasted from _bt_readnextpage, where invalidating the scan's current
position really is necessary.
Follow-up to the refactoring work in commit 1bd4bc85.
Follow-up to bugfix commit 763d65ae. Technically this new assertion is
redundant with the assertion recently added to _bt_readpage by that same
commit, but it seems like a good idea to have both.
The new assertion makes it clear that we expect to call _bt_readnextpage
when there's another primitive index scan scheduled, though only when
needed as the final step of ending the current primitive scan.
Strictly speaking, we only need to make sure to leave the scan's array
keys in their final positions (final for the current scan direction) to
handle SAOP array exhaustion because btgettuple might only return a
subset of the items for the final page (final for the current scan
direction), before the scan changes direction. While it's typical for
so->currPos to be invalidated shortly after the scan's arrays are first
exhausted, and while so->currPos invalidation does obviate the need to
leave the scan's arrays in any particular state, we can't rely on any of
that actually happening when handling array exhaustion. Adjust comments
to make all of that a lot clearer.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
A bug in nbtree's handling of primitive index scan scheduling could lead
to wrong answers when a scrollable cursor was used with an index scan
that had a SAOP index qual. Wrong answers were only possible when the
scan direction changed after a primitive scan was scheduled, but before
_bt_next was asked to fetch the next tuple in line (i.e. for things to
break, _bt_next had to be denied the opportunity to step off the page in
the same direction as the one used when the primscan was scheduled).
Furthermore, the issue only occurred when the page in question happened
to be the first page to be visited by the entire top-level scan; the
issue hinged upon the cursor backing up to the absolute beginning of the
key space that it returns tuples from (fetching in the opposite scan
direction across a "primitive scan boundary" always worked correctly).
To fix, make _bt_next unset the "needs primitive index scan" flag when
it detects that the current scan direction is not the one that was used
by _bt_readpage back when the primitive scan in question was scheduled.
This fixes the cases that are known to be faulty, and also seems like a
good idea on general robustness grounds.
Affected scrollable cursor cases now avoid a spurious primitive index
scan when they fetch backwards to the absolute start of the key space to
be visited by their cursor. Fetching backwards now only returns those
tuples at the start of the scan, as expected. It'll also be okay to
once again fetch forwards from the start at that point, since the scan
will be left in a state that's exactly consistent with the state it was
in before any tuples were ever fetched, as expected.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wznv49bFsE2jkt4GuZ0tU2C91dEST=50egzjY2FeOcHL4Q@mail.gmail.com
Backpatch: 17-, where commit 5bf748b8 first appears.
Tweak some code comments for clarity, and relocate some local variable
declarations to the scope where they're actually used.
Follow-up to recent commit 1bd4bc85.
Make nbtree backwards scans optimistically access the next page to be
read to the left by following a prevPage block number that's now stashed
in currPos when the leaf page is first read. This approach matches the
one taken during forward scans, which follow a symmetric nextPage block
number from currPos. We stash both a prevPage and a nextPage, since the
scan direction might change (when fetching from a scrollable cursor).
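As a self-contained sketch of the mechanism (the types below loosely
mirror the nbtree page opaque data and scan position, but are stand-ins
rather than the committed code):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t BlockNumberSketch;

    /* stand-in for the sibling links kept in each page's special area */
    typedef struct PageOpaqueSketch
    {
        BlockNumberSketch btpo_prev;    /* left sibling block number */
        BlockNumberSketch btpo_next;    /* right sibling block number */
    } PageOpaqueSketch;

    /* stand-in for the scan's current-position state */
    typedef struct ScanPosSketch
    {
        BlockNumberSketch currPage;     /* page most recently read */
        BlockNumberSketch prevPage;     /* stashed left link */
        BlockNumberSketch nextPage;     /* stashed right link */
    } ScanPosSketch;

    /*
     * Called while the leaf page is read: stash both sibling links so
     * that a later step in either direction can go straight to the saved
     * block number without relocking the page it came from.
     */
    static void
    save_sibling_links(ScanPosSketch *pos, BlockNumberSketch currPage,
                       const PageOpaqueSketch *opaque)
    {
        pos->currPage = currPage;
        pos->prevPage = opaque->btpo_prev;
        pos->nextPage = opaque->btpo_next;
    }

    int
    main(void)
    {
        PageOpaqueSketch opaque = {41, 43};
        ScanPosSketch pos;

        save_sibling_links(&pos, 42, &opaque);
        printf("prev=%u next=%u\n", (unsigned) pos.prevPage,
               (unsigned) pos.nextPage);
        return 0;
    }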
Backwards scans will no longer need to lock the same page twice, except
in rare cases where the scan detects a concurrent page split (or page
deletion). Testing has shown this optimization to be particularly
effective during parallel index-only backwards scans: ~12% reductions in
query execution time are quite possible.
We're much better off being optimistic; concurrent left sibling page
splits are rare in general. It's possible that we'll need to lock more
pages than the pessimistic approach would have, but only when there are
_multiple_ concurrent splits of the left sibling page we now start at.
If there's just a single concurrent left sibling page split, the new
approach to scanning backwards will at least break even relative to the
old one (we'll acquire the same number of leaf page locks as before).
The optimization from this commit has long been contemplated by comments
added by commit 2ed5b87f96, which changed the rules for locking/pinning
during nbtree index scans. The approach that that commit introduced to
leaf level link traversal when scanning forwards is now more or less
applied all the time, regardless of the direction we're scanning in.
Following uniform conventions around sibling link traversal is simpler.
The only real remaining difference between our forward and backwards
handling is that our backwards handling must still detect and recover
from any concurrent left sibling splits (and concurrent page deletions),
as documented in the nbtree README. That is structured as a single,
isolated extra step that takes place in _bt_readnextpage.
Also use this opportunity to further simplify the functions that deal
with reading pages and traversing sibling links on the leaf level, and
to document their preconditions and postconditions (with respect to
things like buffer locks, buffer pins, and seizing the parallel scan).
This enhancement completely supersedes the one recently added by commit
3f44959f.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAEze2WgpBGRgTTxTWVPXc9+PB6fc1a7t+VyGXHzfnrFXcQVxnA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkBTuFv7W2+84jJT8mWZLXVL0GHq2hMUTn6c9Vw=eYrCw@mail.gmail.com
Commit 5bf748b8 taught nbtree ScalarArrayOp index scans to decide when
and how to start the next primitive index scan based on physical index
characteristics. This included rules for deciding whether to start a
new primitive index scan (or whether to move onto the right sibling leaf
page instead) that specifically consider truncated lower-order columns
(-inf columns) from leaf page high keys.
These omitted columns were treated as satisfying the scan's required
scan keys, though only for scan keys marked required in the current scan
direction (forward). Scan keys that didn't get this behavior (those
marked required in the backwards direction only) usually didn't give the
scan reasonable cause to reposition itself to a later leaf page (via
another descent of the index in _bt_first), but _bt_advance_array_keys
would nevertheless always give up by forcing another call to _bt_first.
_bt_advance_array_keys was unwilling to allow the scan to continue onto
the next leaf page, to reconsider whether we really should start another
primitive scan based on the details of the sibling page's tuples. This
didn't match its behavior with similar cases involving keys required in
the current scan direction (forward), which seems unprincipled. It led
to an excessive number of primitive scans/index descents for queries
with a higher-order = array scan key (with dense, contiguous values)
mixed with a lower-order required > or >= scan key.
Bring > and >= strategy scan keys in line with other required scan key
types: treat truncated -inf scan keys as having satisfied scan keys
required in either scan direction (forwards and backwards alike) during
array advancement. That way affected scans can continue to the right
sibling leaf page. Advancement must now schedule an explicit recheck of
the right sibling page's high key in cases involving > or >= scan keys.
The recheck gives the scan a way to back out and start another primitive
index scan (we can't just rely on _bt_checkkeys with > or >= scan keys).
This work can be considered a standalone optimization on top of the
work from commit 5bf748b8. But it was written in preparation for an
upcoming patch that will add skip scan to nbtree. In practice scans
that use "skip arrays" will tend to be much more sensitive to any
implementation deficiencies in this area.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=9A_UtM7HzUThSkQ+BcrQsQZuNhWOvQWK06PRkEp=SKQ@mail.gmail.com
All these code paths use their own entry point when starting parallel
workers, but failed to set a query ID, even if they set a text query.
Hence, this data would be missed in pg_stat_activity for the worker
processes. The main entry point for parallel query processing,
ParallelQueryMain(), is already doing that by saving its query ID in a
dummy PlannedStmt, but not the others. The code is changed so that the
query ID of these queries is set in their shared state and reported
back once the parallel workers start.
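As a self-contained sketch of the pattern (all names below are
hypothetical stand-ins; in the server the reporting call is
pgstat_report_query_id()): the leader copies its query ID into the
shared state, and each worker reports it from its entry point before
doing any work:

    #include <stdint.h>
    #include <stdio.h>

    /* stub standing in for pgstat_report_query_id() */
    static void
    report_query_id(uint64_t queryid)
    {
        printf("worker reports query ID %llu\n",
               (unsigned long long) queryid);
    }

    /* stand-in for the shared state the leader sets up for its workers */
    typedef struct SharedStateSketch
    {
        uint64_t    queryid;    /* leader's query ID, copied at launch */
        /* ... the rest of the shared state for the parallel operation ... */
    } SharedStateSketch;

    /* hypothetical worker entry point: report the query ID, then work */
    static void
    parallel_worker_main_sketch(SharedStateSketch *shared)
    {
        report_query_id(shared->queryid);
        /* ... perform this worker's share of the build ... */
    }

    int
    main(void)
    {
        SharedStateSketch shared = {.queryid = 12345};

        parallel_worker_main_sketch(&shared);
        return 0;
    }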
Some tests are added to show how the failures can happen for btree and
BRIN when a parallel build is enforced; these are able to trigger a
failure in an assertion added by 24f5205948 in the recovery TAP test
027_stream_regress.pl, where pg_stat_statements is always loaded. In
this case, the executor path was taken because the index expression
needs to be flattened when building its IndexInfo.
Alexander Lakhin noticed the problem in btree, and I noticed that the
issue was more widespread. This is arguably a bug, but nobody has
complained about it until now, so no backpatch is done out of caution.
If folks would like to see a backpatch, well, let me know.
Reported-by: Alexander Lakhin
Reviewed-by: Sami Imseih
Discussion: https://postgr.es/m/cf3547c1-498a-6a61-7b01-819f902a251f@gmail.com
The array->scan_key references fixed up at the end of preprocessing
start out as offsets into the arrayKeyData[] array (the array returned
by _bt_preprocess_array_keys at the start of preprocessing that involves
array scan keys). Offsets into the arrayKeyData[] array are no longer
guaranteed to be valid offsets into our original scan->keyData[] input
scan key array, but comments describing the array->scan_key references
still talked about scan->keyData[]. Update those comments.
Oversight in commit b5249741.
Teach _bt_preprocess_array_keys to eliminate redundant array equality
scan keys directly, rather than just marking them as redundant. Its
_bt_preprocess_keys caller is no longer required to ignore input scan
keys that were marked redundant in this way. Oversights like the one
fixed by commit f22e17f7 are no longer possible.
The new scheme also makes it easier for _bt_preprocess_keys to output a
so->keyData[] scan key array with _more_ scan keys than it was passed in
its scan->keyData[] input scan key array. An upcoming patch that adds
skip scan optimizations to nbtree will take advantage of this.
In passing, remove and rename certain _bt_preprocess_keys variables to
make the difference between our input scan key array and our output scan
key array clearer.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=9A_UtM7HzUThSkQ+BcrQsQZuNhWOvQWK06PRkEp=SKQ@mail.gmail.com
Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans. Under this scheme a parallel index
scan with array keys will perform the same number of index descents as
an equivalent serial index scan (barring corner cases where an
individual parallel worker discovers that it can advance the scan's
array keys without anybody needing to perform another descent of the
index to get to the relevant page on the leaf level).
Despite all this, the pgstats accounting wasn't updated; it continued to
increment the total number of index scans for the rel once per _bt_first
call, no matter the details. As a result, the number of (primitive)
index scans could be over-counted during parallel scans.
To fix, delay incrementing the count of index scans until after we've
established that another descent of the index (using either _bt_search
or _bt_endpoint) is required. That way pg_stat_user_tables.idx_scan
always advances in the same way, regardless of whether or not the scan
makes use of parallelism.
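A minimal sketch of the shape of this rule (stand-in names only; the
real counter bump is pgstat_count_index_scan()):

    #include <stdbool.h>
    #include <stdio.h>

    /* stand-in for the per-relation "index scans" statistics counter */
    static long index_scan_count = 0;

    /*
     * Only bump the counter once we know this backend will actually
     * perform another descent of the index.  A backend that merely picks
     * up a page some other worker already located counts nothing extra.
     */
    static void
    start_descent_if_needed(bool need_descent)
    {
        if (!need_descent)
            return;

        index_scan_count++;     /* stands in for pgstat_count_index_scan() */
        /* ... descend the index via the search or endpoint code path ... */
    }

    int
    main(void)
    {
        start_descent_if_needed(true);      /* counted */
        start_descent_if_needed(false);     /* not counted */
        printf("%ld\n", index_scan_count);  /* prints 1 */
        return 0;
    }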
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=E7XrkvscBN0U6V81NK3Q-dQOmivvbEsjG-zwEfDdFpg@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans. A backend that successfully
scheduled the scan's next primitive index scan saved its backend local
array keys in shared memory. Any backend could pick up the scheduled
primitive scan within _bt_first. This scheme decouples scheduling a
primitive scan from starting the scan (by performing another descent of
the index via a _bt_search call from _bt_first) to make things robust.
The scheme had a deadlock hazard, at least when the leader process
participated in the scan. _bt_parallel_seize had a code path that made
backends that were not in an immediate position to start a scheduled
primitive index scan wait for some other backend to do so instead.
Under the right circumstances, the leader process could wait here
forever: the leader would wait for any other backend to start the
primitive scan, while every worker was busy waiting on the leader to
consume tuples from the scan's tuple queue.
To fix, don't wait for a scheduled primitive index scan to be started by
some other eligible backend from within _bt_parallel_seize (when the
calling backend isn't in a position to do so itself). Return false
instead, while recording that the scan has a scheduled primitive index
scan in backend local state. This leaves the backend in the same state
as the existing case where a backend schedules (or tries to schedule)
another primitive index scan from within _bt_advance_array_keys, before
calling _bt_parallel_seize. _bt_parallel_seize already handles that
case by returning false without waiting, and without unsetting the
backend local state. Leaving the backend in this state enables it to
start a previously scheduled primitive index scan once it gets back to
_bt_first.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Matthias van de Meent, with tweaks by me.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reported-By: Tomas Vondra <tomas@vondra.me>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzmMGaPa32u9x_FvEbPTUkP5e95i=QxR8054nvCRydP-sw@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
The index access methods all had similar code that copied the
passed-in scan keys to local storage. They all used memmove() for
that, which is not wrong, but it seems confusing not to use memcpy()
when that would work. Presumably, this was all once copied from
ancient code and never adjusted.
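For illustration, a self-contained sketch of the pattern in question
(stand-in types; not any particular AM's rescan code): the keys are
copied into scan-local storage, and since the regions never overlap,
memcpy() states the intent more plainly than memmove():

    #include <stdio.h>
    #include <string.h>

    /* stand-in for a scan key entry */
    typedef struct ScanKeySketch
    {
        int     sk_attno;       /* index column number */
        int     sk_strategy;    /* operator strategy number */
        /* ... flags, operator, argument datum, etc. ... */
    } ScanKeySketch;

    /* copy the caller's keys into scan-local storage once per rescan */
    static void
    rescan_copy_keys(ScanKeySketch *dst, const ScanKeySketch *src, int nkeys)
    {
        if (nkeys > 0)
            memcpy(dst, src, nkeys * sizeof(ScanKeySketch));
    }

    int
    main(void)
    {
        ScanKeySketch input[2] = {{1, 3}, {2, 5}};
        ScanKeySketch local[2];

        rescan_copy_keys(local, input, 2);
        printf("%d %d\n", local[1].sk_attno, local[1].sk_strategy);
        return 0;
    }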
Discussion: https://www.postgresql.org/message-id/flat/f8c739d9-f48d-4187-b214-df3391ba41ab@eisentraut.org
Add bounds checking to nbtree's lookahead/skip-within-a-page mechanism.
Otherwise it's possible for cases with lots of before-array-keys tuples
to overflow an int16 variable, causing the mechanism to generate an
out-of-bounds page offset number.
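As a self-contained sketch of the kind of check involved (stand-in
types and names; not the committed fix): do the arithmetic in a wider
type and clamp the target offset to the page bounds before narrowing it
again:

    #include <stdint.h>
    #include <stdio.h>

    typedef int16_t OffsetNumberSketch;

    /*
     * Compute the offset a lookahead jump would land on, doing the math
     * in int so that a large distance cannot overflow int16, and clamping
     * the result to the page's last item offset.
     */
    static OffsetNumberSketch
    lookahead_target(OffsetNumberSketch cur, int distance,
                     OffsetNumberSketch maxoff)
    {
        int     target = (int) cur + distance;

        if (target > (int) maxoff)
            target = (int) maxoff;

        return (OffsetNumberSketch) target;
    }

    int
    main(void)
    {
        /* without the clamp, 30000 + 10000 would wrap around in an int16 */
        printf("%d\n", (int) lookahead_target(30000, 10000, 32000));
        return 0;
    }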
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Reported-By: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/6c68ac42-bbb5-8b24-103e-af0e279c536f@gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
Teach nbtree backwards scans to avoid relocking a just-read leaf page to
read its current left sibling link when it isn't truly necessary. This
happened inside _bt_readnextpage whenever _bt_readpage had already
determined that there'll be no further matches to the left (or at least
none for the current primitive index scan, for a scan with array keys).
A new precheck inside _bt_readnextpage is all that we need to avoid
these useless lock acquisitions. Arguably, using a precheck like this
was a missed opportunity for commit 2ed5b87f96, which taught nbtree to
drop leaf page pins early to avoid blocking cleanup by VACUUM. Forwards
scans already managed to avoid relocking the page like this.
The optimization added by this commit is particularly helpful with
backwards scans that use array keys where the scan must perform multiple
primitive index scans. Such backwards scans will now avoid a useless
leaf page re-lock at the end of each primitive index scan.
Note that this commit does not attempt to avoid needlessly re-locking a
leaf page that was just read when the scan must follow the leaf page's
left link. That more ambitious optimization could work by stashing the
left link when the page is first read by a backwards scan, allowing the
subsequent _bt_readnextpage call to optimistically skip re-reading the
original page just to get a new copy of its left link. For now we only
address cases where we don't care about our original page's left link.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wz=xgs7PojG=EUvhgadwENzu_mY_riNh-w9wFPsaS717ew@mail.gmail.com
_bt_advance_array_keys didn't take sufficient care at the point where it
decides whether to start a new primitive index scan based on a call to
_bt_check_compare against finaltup (a call with the scan direction
flipped around). The final decision was conditioned on rules about how
the scan key offset sktrig that initially triggered array advancement
(passed to _bt_advance_array_keys from its _bt_checkkeys caller)
compared to the offset set by its own _bt_check_compare finaltup call.
This approach was faulty, in that it allowed _bt_advance_array_keys to
incorrectly start a new primitive index scan that landed on the same
leaf page (on assert-enabled builds it led to an assertion failure).
In general, scans with array keys are expected to never have to read the
same leaf page more than once (barring cases involving cursors, and
cases where the scan restores a marked position for the inner side of a
merge join). This principle was established by commit 5bf748b8.
To fix, make the final decision based on whether the scan key offset set
by the _bt_check_compare finaltup call is an offset to an inequality
strategy scan key. An unsatisfied required inequality strategy scan key
indicates that all of the scan's required equality strategy scan keys
must also be satisfied by finaltup (not just by caller's tuple), and
that there is a decent chance that _bt_first will be able to reposition
the scan to a position many leaf pages ahead of the current leaf page.
Oversight in commit 5bf748b8.
Discussion: https://postgr.es/m/CAH2-Wz=DyHbcg7o6zXqzyiin8WE8vzk4tvU8Lrnh-a=EAvO0TQ@mail.gmail.com
Certain cases involving the use of cursors had assertion failures within
_bt_preprocess_keys's recently added no-op return path. The assertion
in question made the faulty assumption that a second or third call to
_bt_preprocess_keys (within the same btrescan) could only happen when
another scheduled primitive index scan was just about to begin.
It would be possible to address the problem by only allowing scans that
have array keys to take the new no-op path, forcing affected cases to
perform redundant preprocessing work. It seems simpler to just remove
the assertion, and reframe the no-op path as a more general mechanism.
Take this simpler approach.
The important underlying principle is that we only need to perform
preprocessing once per btrescan (at most). This is expected regardless
of whether or not the scan happens to have array keys.
Oversight in commit 1b134ca5, which enhanced nbtree ScalarArrayOp
execution.
Reported-By: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/ef0f7c8b-a6fa-362e-6fd6-054950f947ca@gmail.com