mirror of https://github.com/postgres/postgres.git synced 2025-06-27 23:21:58 +03:00

Make _bt_killitems drop pins it acquired itself.

Teach nbtree's _bt_killitems to leave the so->currPos page that it sets
LP_DEAD items on in whatever state it was in when _bt_killitems was
called.  In particular, make sure that so->dropPin scans don't acquire a
pin whose reference is saved in so->currPos.buf.

Allowing _bt_killitems to change so->currPos.buf like this is wrong.
The immediate consequence of allowing it is that code in _bt_steppage
(that copies so->currPos into so->markPos) will behave as if the scan is
a !so->dropPin scan.  so->markPos will therefore retain the buffer pin
indefinitely, even though _bt_killitems only needs to acquire a pin
(along with a lock) for long enough to mark known-dead items LP_DEAD.
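The pin-retention hazard described above can be sketched with a toy model (plain C, illustrative only, not PostgreSQL code; the struct and function names are hypothetical stand-ins for the so->currPos/so->markPos bookkeeping):

```c
#include <assert.h>
#include <stdbool.h>

/* A Buffer is just a handle; InvalidBuffer means "no pin is held". */
typedef int Buffer;
#define InvalidBuffer 0

typedef struct ScanPos
{
    Buffer buf;                 /* pinned buffer, or InvalidBuffer */
} ScanPos;

typedef struct Scan
{
    bool dropPin;               /* scan drops pins eagerly? */
    ScanPos currPos;
    ScanPos markPos;
} Scan;

/*
 * Buggy behavior: the transient pin that killitems acquired for itself is
 * saved in currPos.buf, so a later mark() copies it into markPos and the
 * scan now behaves like a !dropPin scan holding the pin indefinitely.
 */
static void
killitems_buggy(Scan *so, Buffer transient_pin)
{
    so->currPos.buf = transient_pin;
}

/*
 * Fixed behavior: the transient pin stays in a local variable and is
 * released before returning; currPos is left exactly as it was found.
 */
static void
killitems_fixed(Scan *so, Buffer transient_pin)
{
    (void) transient_pin;       /* pin dropped locally; currPos untouched */
}

/* mark() models _bt_steppage copying currPos wholesale into markPos */
static void
mark(Scan *so)
{
    so->markPos = so->currPos;
}
```

In the buggy version, a dropPin scan ends up with a pin reference stashed in markPos even though the scan's contract says no position may retain one; the fixed version preserves the invariant.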

This issue came to light following a report of a failure of an assertion
from recent commit e6eed40e.  The test case in question involves the use
of mark and restore.  An initial call to _bt_killitems takes place that
leaves so->currPos.buf in a state that is inconsistent with the scan
being so->dropPin.  A subsequent call to _bt_killitems for the same
position (following so->currPos being saved in so->markPos, and then
restored as so->currPos) resulted in the failure of an assertion that
tests that so->currPos.buf is InvalidBuffer when the scan is so->dropPin
(non-assert builds got a "resource was not closed" WARNING instead).

The same problem exists on earlier releases, though the issue is far
more subtle there.  Recent commit e6eed40e introduced the so->dropPin
field as a partial replacement for testing so->currPos.buf directly.
Earlier releases won't get an assertion failure (or buffer pin leak),
but they will allow the second _bt_killitems call from the test case to
behave as if a buffer pin was consistently held since the original call
to _bt_readpage.  This is wrong; there will have been an initial window
during which no pin was held on the so->currPos page, and yet the second
_bt_killitems call will neglect to check if so->currPos.lsn continues to
match the page's now-current LSN.

As a result of all this, it's just about possible that _bt_killitems
will set the wrong items LP_DEAD (on release branches).  This could only
happen with merge joins (the sole user of nbtree mark/restore support),
when a concurrently inserted index tuple used a recently-recycled TID
(and only when the new tuple was inserted onto the same page as a
distinct concurrently-removed tuple with the same TID).  This is exactly
the scenario that _bt_killitems' check of the page's now-current LSN
against the LSN stashed in currPos was supposed to prevent.
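The LSN-based safety check that the message refers to can be sketched as follows (a standalone illustration, not the actual nbtree code; the function name and typedef are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;    /* stand-in for PostgreSQL's WAL position */

/*
 * A dropPin scan held no pin between _bt_readpage and _bt_killitems, so
 * VACUUM may have removed tuples and allowed their TIDs to be recycled.
 * Setting LP_DEAD bits is only safe if the page provably did not change,
 * which an unchanged page LSN demonstrates.  A !dropPin scan held its pin
 * the whole time, which blocks VACUUM from recycling the page's TIDs.
 */
static bool
lp_dead_setting_is_safe(bool dropPin, XLogRecPtr saved_lsn,
                        XLogRecPtr current_lsn)
{
    if (!dropPin)
        return true;            /* continuous pin blocks TID recycling */
    return saved_lsn == current_lsn;    /* give up if the page changed */
}
```

The bug on release branches amounts to skipping this recheck on the second _bt_killitems call, because the stale pin reference made the scan look like the !dropPin case.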

A follow-up commit will make nbtree completely stop conditioning whether
or not a position's pin needs to be dropped on whether the 'buf' field
is set.  All call sites that might need to drop a still-held pin will be
taught to rely on the scan-level so->dropPin field recently introduced
by commit e6eed40e.  That will make bugs of the same general nature as
this one impossible (or make them much easier to detect, at least).

Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/545be1e5-3786-439a-9257-a90d30f8b849@gmail.com
Backpatch-through: 13
Peter Geoghegan
2025-06-11 09:17:35 -04:00
parent 361499538c
commit 7c319f5491
2 changed files with 28 additions and 21 deletions


@@ -417,6 +417,8 @@ btrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
 	 * way, so we might as well avoid wasting cycles on acquiring page LSNs.
 	 *
 	 * See nbtree/README section on making concurrent TID recycling safe.
+	 *
+	 * Note: so->dropPin should never change across rescans.
 	 */
 	so->dropPin = (!scan->xs_want_itup &&
 				   IsMVCCSnapshot(scan->xs_snapshot) &&


@@ -3323,24 +3323,26 @@ _bt_checkkeys_look_ahead(IndexScanDesc scan, BTReadPageState *pstate,
  * current page and killed tuples thereon (generally, this should only be
  * called if so->numKilled > 0).
  *
- * The caller does not have a lock on the page and may or may not have the
- * page pinned in a buffer.  Note that read-lock is sufficient for setting
- * LP_DEAD status (which is only a hint).
+ * Caller should not have a lock on the so->currPos page, but must hold a
+ * buffer pin when !so->dropPin.  When we return, it still won't be locked.
+ * It'll continue to hold whatever pins were held before calling here.
  *
- * We match items by heap TID before assuming they are the right ones to
- * delete.  We cope with cases where items have moved right due to insertions.
- * If an item has moved off the current page due to a split, we'll fail to
- * find it and do nothing (this is not an error case --- we assume the item
- * will eventually get marked in a future indexscan).
- *
- * Note that if we hold a pin on the target page continuously from initially
- * reading the items until applying this function, VACUUM cannot have deleted
- * any items on the page, so the page's TIDs can't have been recycled by now.
- * There's no risk that we'll confuse a new index tuple that happens to use a
- * recycled TID with a now-removed tuple with the same TID (that used to be on
- * this same page).  We can't rely on that during scans that drop pins eagerly
+ * We match items by heap TID before assuming they are the right ones to set
+ * LP_DEAD.  If the scan is one that holds a buffer pin on the target page
+ * continuously from initially reading the items until applying this function
+ * (if it is a !so->dropPin scan), VACUUM cannot have deleted any items on the
+ * page, so the page's TIDs can't have been recycled by now.  There's no risk
+ * that we'll confuse a new index tuple that happens to use a recycled TID
+ * with a now-removed tuple with the same TID (that used to be on this same
+ * page).  We can't rely on that during scans that drop buffer pins eagerly
  * (so->dropPin scans), though, so we must condition setting LP_DEAD bits on
  * the page LSN having not changed since back when _bt_readpage saw the page.
+ * We totally give up on setting LP_DEAD bits when the page LSN changed.
+ *
+ * We give up much less often during !so->dropPin scans, but it still happens.
+ * We cope with cases where items have moved right due to insertions.  If an
+ * item has moved off the current page due to a split, we'll fail to find it
+ * and just give up on it.
  */
void
_bt_killitems(IndexScanDesc scan)
@@ -3353,6 +3355,7 @@ _bt_killitems(IndexScanDesc scan)
 	OffsetNumber maxoff;
 	int			numKilled = so->numKilled;
 	bool		killedsomething = false;
+	Buffer		buf;
 
 	Assert(numKilled > 0);
 	Assert(BTScanPosIsValid(so->currPos));
@@ -3369,11 +3372,11 @@ _bt_killitems(IndexScanDesc scan)
 		 * concurrent VACUUMs from recycling any of the TIDs on the page.
 		 */
 		Assert(BTScanPosIsPinned(so->currPos));
-		_bt_lockbuf(rel, so->currPos.buf, BT_READ);
+		buf = so->currPos.buf;
+		_bt_lockbuf(rel, buf, BT_READ);
 	}
 	else
 	{
-		Buffer		buf;
 		XLogRecPtr	latestlsn;
 
 		Assert(!BTScanPosIsPinned(so->currPos));
@@ -3391,10 +3394,9 @@ _bt_killitems(IndexScanDesc scan)
 		}
 
 		/* Unmodified, hinting is safe */
-		so->currPos.buf = buf;
 	}
 
-	page = BufferGetPage(so->currPos.buf);
+	page = BufferGetPage(buf);
 	opaque = BTPageGetOpaque(page);
 	minoff = P_FIRSTDATAKEY(opaque);
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -3511,10 +3513,13 @@ _bt_killitems(IndexScanDesc scan)
 	if (killedsomething)
 	{
 		opaque->btpo_flags |= BTP_HAS_GARBAGE;
-		MarkBufferDirtyHint(so->currPos.buf, true);
+		MarkBufferDirtyHint(buf, true);
 	}
 
-	_bt_unlockbuf(rel, so->currPos.buf);
+	if (!so->dropPin)
+		_bt_unlockbuf(rel, buf);
+	else
+		_bt_relbuf(rel, buf);
 }